Dataset columns:
- content: string (0 to 557k characters)
- url: string (16 to 1.78k characters)
- timestamp: timestamp[ms]
- dump: string (9 to 15 characters)
- segment: string (13 to 17 characters)
- image_urls: string (2 to 55.5k characters)
- netloc: string (7 to 77 characters)
Showing and Hiding the Palette List You can show or hide the palette list in the Colour view. - In the Colour view, click the Show/Hide Palette List View button to expand or collapse the Palette List area. The Palette List window opens and displays all your palettes.
https://docs.toonboom.com/help/harmony-14/essentials/colour/show-hide-palette-list.html
2018-12-10T06:11:11
CC-MAIN-2018-51
1544376823318.33
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Trad_Anim/004_Colour/HAR11_display_palette.png', None], dtype=object) ]
docs.toonboom.com
cubicle with doors office sign acrylic name plate wall and holder cleaning shower. Related Post Nameplates For Cubicles Wood Gun Cabinets Rifle Cabinet Glass Gun Cabinet Electronic Dart Board Cabinet Glass Gun Cabinets Electronic Dartboard With Cabinet Dart Board Cabinet Fire Safe File Cabinet Sauder Storage Cabinet White Office Floor Mats For Carpet Wood Gun Cabinet Grand Office Supply Gun Display Cabinet Dartboard Cabinet
http://top-docs.co/cubicle-with-doors/cubicle-with-doors-office-sign-acrylic-name-plate-wall-and-holder-cleaning-shower/
2018-12-10T06:51:11
CC-MAIN-2018-51
1544376823318.33
[array(['http://top-docs.co/wp-content/uploads/2018/06/cubicle-with-doors-office-sign-acrylic-name-plate-wall-and-holder-cleaning-shower.jpg', 'cubicle with doors office sign acrylic name plate wall and holder cleaning shower cubicle with doors office sign acrylic name plate wall and holder cleaning shower'], dtype=object) ]
top-docs.co
As described in the chapter Syntax, the "|" operator can be applied to a "${}" expression to apply escape filters to the output:

${"this is some text" | u}

The above expression applies URL escaping to the expression, and produces this+is+some+text.

The built-in escape flags are:

- u : URL escaping, provided by urllib.quote_plus(string.encode('utf-8'))
- h : HTML escaping, provided by markupsafe.escape(string). New in version 0.3.4: prior versions use cgi.escape(string, True).
- x : XML escaping
- trim : whitespace trimming, provided by string.strip()
- entity : produces HTML entity references for applicable strings, derived from htmlentitydefs
- unicode (str on Python 3): produces a Python unicode string (this function is applied by default)
- decode.<some encoding>: decode input into a Python unicode with the specified encoding
- n : disable all default filtering; only filters specified in the local expression tag will be applied.

To apply more than one filter, separate them by a comma:

${" <tag>some value</tag> " | h,trim}

The above produces &lt;tag&gt;some value&lt;/tag&gt;, with no leading or trailing whitespace. The HTML escaping function is applied first, the "trim" function second.

Naturally, you can make your own filters too. A filter is just a Python function that accepts a single string argument, and returns the filtered result. The expressions after the | operator draw upon the local namespace of the template in which they appear, meaning you can define escaping functions locally:

<%!
    def myescape(text):
        return "<TAG>" + text + "</TAG>"
%>

Here's some tagged text: ${"text" | myescape}

Or from any Python module:

<%!
    import myfilters
%>

Here's some tagged text: ${"text" | myfilters.tagfilter}

A page can apply a default set of filters to all expression tags using the expression_filter argument to the %page tag:

<%page expression_filter="h"/>

Escaped text: ${"<html>some html</html>"}

Result:

Escaped text: &lt;html&gt;some html&lt;/html&gt;

The default_filters Argument

In addition to the expression_filter argument, the default_filters argument to both Template and TemplateLookup can specify filtering for all expression tags at the programmatic level. This array-based argument, when given its default argument of None, will be internally set to ["unicode"] (or ["str"] on Python 3), except when disable_unicode=True is set in which case it defaults to ["str"]:

t = TemplateLookup(directories=['/tmp'], default_filters=['unicode'])

To replace the usual unicode/str function with a specific encoding, the decode filter can be substituted:

t = TemplateLookup(directories=['/tmp'], default_filters=['decode.utf8'])

To disable default_filters entirely, set it to an empty list:

t = TemplateLookup(directories=['/tmp'], default_filters=[])

Any string name can be added to default_filters where it will be added to all expressions as a filter. The filters are applied from left to right, meaning the leftmost filter is applied first.

t = Template(templatetext, default_filters=['unicode', 'myfilter'])

To ease the usage of default_filters with custom filters, you can also add imports (or other code) to all templates using the imports argument:

t = TemplateLookup(directories=['/tmp'],
                   default_filters=['unicode', 'myfilter'],
                   imports=['from mypackage import myfilter'])

The above will generate templates something like this:

# ....
from mypackage import myfilter

def render_body(context):
    context.write(myfilter(unicode("some text")))

The n Filter

In all cases the special n filter, used locally within an expression, will disable all filters declared in the <%page> tag as well as in default_filters. Such as:

${'myexpression' | n}

will render myexpression with no filtering of any kind, and:

${'myexpression' | n,trim}

will render myexpression using the trim filter only.

The %def and %block tags have an argument called filter which will apply the given list of filter functions to the output of the %def:

<%def name="foo()" filter="h, trim">
    <b>this is bold</b>
</%def>

When the filter attribute is applied to a def as above, the def is automatically buffered as well. This is described next.

One of Mako's central design goals is speed. To this end, all of the textual content within a template and its various callables is by default piped directly to the single buffer that is stored within the Context object. While this normally is easy to miss, it has certain side effects. The main one is that when you call a def using the normal expression syntax, i.e. ${somedef()}, it may appear that the return value of the function is the content it produced, which is then delivered to your template just like any other expression substitution, except that normally, this is not the case; the return value of ${somedef()} is simply the empty string ''. By the time you receive this empty string, the output of somedef() has been sent to the underlying buffer. You may not want this effect, if for example you are doing something like this:

${" results " + somedef() + " more results "}

If the somedef() function produced the content "somedef's results", the above template would produce this output:

somedef's results results more results

This is because somedef() fully executes before the expression returns the results of its concatenation; the concatenation in turn receives just the empty string as its middle expression. Mako provides two ways to work around this. One is by applying buffering to the %def itself:

<%def name="somedef()" buffered="True">
    somedef's results
</%def>

The above definition will generate code similar to this:

def somedef():
    context.push_buffer()
    try:
        context.write("somedef's results")
    finally:
        buf = context.pop_buffer()
    return buf.getvalue()

So that the content of somedef() is sent to a second buffer, which is then popped off the stack and its value returned. The speed hit inherent in buffering the output of a def is also apparent.

Note that the filter argument on %def also causes the def to be buffered. This is so that the final content of the %def can be delivered to the escaping function in one batch, which reduces method calls and also produces more deterministic behavior for the filtering function itself, which can possibly be useful for a filtering function that wishes to apply a transformation to the text as a whole.

The other way to buffer the output of a def or any Mako callable is by using the built-in capture function. This function performs an operation similar to the above buffering operation except it is specified by the caller.

${" results " + capture(somedef) + " more results "}

Note that the first argument to the capture function is the function itself, not the result of calling it. This is because the capture function takes over the job of actually calling the target function, after setting up a buffered environment.
To send arguments to the function, just send them to capture instead:

${capture(somedef, 17, 'hi', use_paging=True)}

The above call is equivalent to the unbuffered call:

${somedef(17, 'hi', use_paging=True)}

New in version 0.2.5.

Somewhat like a filter for a %def but more flexible, the decorator argument to %def allows the creation of a function that will work in a similar manner to a Python decorator. The function can control whether or not the function executes. The original intent of this function is to allow the creation of custom cache logic, but there may be other uses as well. decorator is intended to be used with a regular Python function, such as one defined in a library module. Here we'll illustrate the Python function defined in the template for simplicity's sake:

<%!
    def bar(fn):
        def decorate(context, *args, **kw):
            context.write("BAR")
            fn(*args, **kw)
            context.write("BAR")
            return ''
        return decorate
%>

<%def name="foo()" decorator="bar">
    this is foo
</%def>

${foo()}

The above template will return, with more whitespace than this, "BAR this is foo BAR". The function is the render callable itself (or possibly a wrapper around it), and by default will write to the context. To capture its output, use the capture() callable in the mako.runtime module (available in templates as just runtime):

<%!
    def bar(fn):
        def decorate(context, *args, **kw):
            return "BAR" + runtime.capture(context, fn, *args, **kw) + "BAR"
        return decorate
%>

<%def name="foo()" decorator="bar">
    this is foo
</%def>

${foo()}

The decorator can be used with top-level defs as well as nested defs, and blocks too. Note that when calling a top-level def from the Template API, i.e. template.get_def('somedef').render(), the decorator has to write the output to the context, i.e. as in the first example. The return value gets discarded.
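To tie the filtering pieces above together, here is a small, self-contained Python sketch. The filter name shout and the template strings are invented for illustration; the Template arguments (default_filters, imports) and the | syntax are the Mako features described above, and the import line assumes the script is run directly as __main__.

```python
# A minimal, runnable sketch of Mako expression filtering.
from mako.template import Template

def shout(text):
    """A custom filter: a plain function that takes a string and returns a string."""
    return text.upper()

# Apply the custom filter to every ${...} expression via default_filters.
# The imports argument makes "shout" visible inside the generated template module
# (assumes this script is executed directly, so the module is __main__).
t = Template(
    "hello, ${name}",
    default_filters=["str", "shout"],
    imports=["from __main__ import shout"],
)
print(t.render(name="mako"))   # -> hello, MAKO (only the expression is filtered)

# The same idea applied locally: n disables the defaults, h HTML-escapes.
t2 = Template("${'<b>hi</b>' | n,h}")
print(t2.render())             # -> &lt;b&gt;hi&lt;/b&gt;
```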
https://docs.makotemplates.org/en/latest/filtering.html
2018-12-10T07:45:55
CC-MAIN-2018-51
1544376823318.33
[]
docs.makotemplates.org
CollectionMapping

// Java
CollectionMapping dataMapping = new CollectionMapping(dataCollection);

JSONObject mapping = new JSONObject();
JSONObject type = new JSONObject();
type.put("type", "string");
mapping.put("foo", type);
CollectionMapping dataMapping = new CollectionMapping(dataCollection, mapping);

// PHP
use \Kuzzle\Kuzzle;
use \Kuzzle\DataMapping;

$mapping = [
  'someField' => [
    'type' => 'string',
    'index' => 'analyzed'
  ]
];

$kuzzle = new Kuzzle('localhost');
$dataCollection = $kuzzle->collection('collection', 'index');
$dataMapping = $dataCollection->collectionMapping($mapping);
// $dataMapping instanceof DataMapping

Use this object when creating a new data collection, and to modify it if necessary.

CollectionMapping(Collection, [mapping])

Properties

Note: the headers property is inherited from the provided Collection object and can be overridden
https://docs.kuzzle.io/sdk-reference/collection-mapping/
2018-11-12T19:53:26
CC-MAIN-2018-47
1542039741087.23
[]
docs.kuzzle.io
signal — Set handlers for asynchronous events

This module provides mechanisms to use signal handlers in Python.

General rules: The signal.signal() function allows defining custom handlers to be executed when a signal is received.

Signals and threads. Module contents.

SIG* — All the signal numbers are defined symbolically. For example, the hangup signal is defined as signal.SIGHUP; the variable names are identical to the names used in C programs, as found in <signal.h>.

signal.CTRL_C_EVENT — The signal corresponding to the Ctrl+C keystroke event. This signal can only be used with os.kill(). Availability: Windows. New in version 3.2.

signal.CTRL_BREAK_EVENT — The signal corresponding to the Ctrl+Break keystroke event. This signal can only be used with os.kill(). Availability: Windows. New in version 3.2.

signal.SIG_UNBLOCK — A possible value for the how parameter to pthread_sigmask() indicating that signals are to be unblocked. New in version 3.3.

signal.SIG_SETMASK — A possible value for the how parameter to pthread_sigmask() indicating that the signal mask is to be replaced. New in version 3.3.

The signal module defines one exception: exception signal.ItimerError.

signal.alarm(time) — If time is non-zero, this function requests that a SIGALRM signal be sent to the process in time seconds.

signal.pause() — Cause the process to sleep until a signal is received; the appropriate handler will then be called. Returns nothing. Not on Windows. (See the Unix man page signal(2).) See also sigwait(), sigwaitinfo(), sigtimedwait() and sigpending().

signal.pthread_kill(thread_id, signalnum) — Send the signal signalnum to the given thread; use the ident attribute of threading.Thread objects to get a suitable value for thread_id. If signalnum is 0, then no signal is sent, but error checking is still performed; this can be used to check if the target thread is still running. Availability: Unix (see the man page pthread_kill(3) for further information). New in version 3.3.

signal.getitimer(which) — Returns current value of a given interval timer specified by which. Availability: Unix.

Changed in version 3.5: On Windows, the function now also supports socket handles.

signal.sigwait(sigset) — Suspend execution of the calling thread until the delivery of one of the signals specified in the signal set sigset. New in version 3.3.

Example: ... also whenever any socket connection is interrupted while your program is still writing to it.
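A short, runnable sketch of the handler mechanism described above (Unix-only, since it relies on SIGALRM; the two-second timeout and the messages are illustrative, not taken from the documentation):

```python
# Minimal illustration of signal.signal() and signal.alarm().
import signal
import time

def handler(signum, frame):
    # A handler receives the signal number and the current stack frame.
    raise TimeoutError(f"operation interrupted by signal {signum}")

signal.signal(signal.SIGALRM, handler)   # install the handler for SIGALRM
signal.alarm(2)                          # request SIGALRM in 2 seconds

try:
    time.sleep(10)                       # stand-in for a slow operation
except TimeoutError as exc:
    print(exc)
finally:
    signal.alarm(0)                      # cancel any pending alarm
```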
https://docs.python.org/fr/3/library/signal.html
2018-11-12T20:58:58
CC-MAIN-2018-47
1542039741087.23
[]
docs.python.org
Release Notes for GlusterFS 3.7.0 Major Changes and Features Documentation about major changes and features is included in the doc/features/ directory of GlusterFS repository. Geo Replication Many improvements have gone in the geo replication. A detailed documentation about all the improvements can be found here Bitrot Detection Bitrot detection is a technique used to identify an “insidious” type of disk error where data is silently corrupted with no indication from the disk to the storage software layer that an error has occurred. When bitrot detection is enabled on a volume, gluster performs signing of all files/objects in the volume and scrubs data periodically for signature verification. All anomalies observed will be noted in log files. For more information, refer here. Multi threaded epoll for performance improvements Gluster 3.7 introduces multiple threads to dequeue and process more requests from epoll queues. This improves performance by processing more I/O requests. Workloads that involve read/write operations on a lot of small files can benefit from this enhancement. For more information refer here. Volume Tiering [Experimental] Policy based tiering for placement of files. This feature will serve as a foundational piece for building support for data classification. For more information refer here. Volume Tiering is marked as an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release. Trashcan This feature will enable administrators to temporarily store deleted files from Gluster volumes for a specified time period. For more information refer here. Efficient Object Count and Inode Quota Support This improvement enables an easy mechanism to retrieve the number of objects per directory or volume. Count of objects/files within a directory hierarchy is stored as an extended attribute of a directory. The extended attribute can be queried to retrieve the count. For more information refer here. This feature has been utilized to add support for inode quotas. For more details about inode quotas, refer here. Pro-active Self healing for Erasure Coding Gluster 3.7 adds pro-active self healing support for erasure coded volumes. Exports and Netgroups Authentication for NFS This feature adds Linux-style exports & netgroups authentication to the native NFS server. This enables administrators to restrict access to specific clients & netgroups for volume/sub-directory NFSv3 exports. For more information refer here. GlusterFind GlusterFind is a new tool that provides a mechanism to monitor data events within a volume. Detection of events like modified files is made easier without having to traverse the entire volume. For more information refer here. Rebalance Performance Improvements Rebalance and remove brick operations in Gluster get a performance boost by speeding up identification of files needing movement and a multi-threaded mechanism to move all such files. For more information refer here. NFSv4 and pNFS support Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and pNFS. This support is enabled via NFS Ganesha. Infrastructure changes done in Gluster 3.7 to support this feature include: - Addition of upcall infrastructure for cache invalidation. - Support for lease locks and delegations. - Support for enabling Ganesha through Gluster CLI. - Corosync and pacemaker based implementation providing resource monitoring and failover to accomplish NFS HA. 
For more information refer to the links below:

- NFS Ganesha Integration
- Upcall Infrastructure
- Gluster CLI for NFS Ganesha
- High Availability for NFS Ganesha
- pNFS support for Gluster

pNFS support for Gluster volumes and NFSv4 delegations are in beta for this release. Infrastructure changes to support Lease locks and NFSv4 delegations are targeted for a 3.7.x minor release.

Snapshot Scheduling
With this enhancement, administrators can schedule volume snapshots. For more information, see here.

Snapshot Cloning
Volume snapshots can now be cloned to create a new writeable volume. For more information, see here.

Sharding [Experimental]
Sharding addresses the problem of fragmentation of space within a volume. This feature adds support for files that are larger than the size of an individual brick. Sharding works by chunking files to blobs of a configurable size. For more information, see here. Sharding is an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.

RCU in glusterd
Thread synchronization and critical section access has been improved by introducing userspace RCU in glusterd.

Arbiter Volumes
Arbiter volumes are 3 way replicated volumes where the 3rd brick of the replica is automatically configured as an arbiter. The 3rd brick contains only metadata which provides network partition tolerance and prevents split-brains from happening. For more information, see here.

Better split-brain resolution
Split-brain resolution can now also be driven by users without administrative intervention. For more information, see the 'Resolution of split-brain from the mount point' section here.

Minor Improvements
- Message ID based logging has been added for several translators.
- Quorum support for reads.
- Snapshot names contain timestamps by default. Subsequent access to the snapshots should be done by the name listed in gluster snapshot list.
- Support for gluster volume get <volname> added.
- libgfapi has added handle based functions to get/set POSIX ACLs based on common libacl structures.

Known Issues
- Enabling Bitrot on volumes with more than 2 bricks on a node is known to cause problems.
- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.

The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

~~~
gluster volume set server.allow-insecure on
~~~

Edit /etc/glusterfs/glusterd.vol to contain this line:

option rpc-auth-allow-insecure on

Post 1, restarting the volume would be necessary:

~~~
gluster volume stop
gluster volume start
~~~

Post 2, restarting glusterd would be necessary:

~~~
service glusterd restart
~~~

or

~~~
systemctl restart glusterd
~~~

Upgrading to 3.7.0
Instructions for upgrading from previous versions of GlusterFS are maintained on this page.
https://gluster.readthedocs.io/en/latest/release-notes/3.7.0/
2018-11-12T21:02:08
CC-MAIN-2018-47
1542039741087.23
[]
gluster.readthedocs.io
Table of Contents You are creating/updating records when you work on Evergreen. Reporting means you extract some of these records from the database that meet your requirements. Understanding the Evergreen database and how records are created/updated when tasks are performed on the staff client, will help you when you create templates and set up reports on the Reports interface. There are various kinds of data used by Evergreen, such as patron’s names, address, barcode, item’s barcode, shelving location, status, price; checkout date, returned date, fines and bills and so on. This data must be organized in an efficient and effective way to make sure they can be stored and retrieved easily. Evergreen uses various tables to keep each type of records. You can visualize a table as an MS Excel Worksheet: a specified number of columns with unlimited number of rows. Each column is called a field in the database terminology and each row is a record. There are many tables in Evergreen database. Each contains a certain type of records. The fields in a record you see on the Staff Client may be from more than one tables. For example, in a patron record, you can find patron’s names, address, phone number, barcode, profile, etc. all in one record. But in the database, patron’s address, barcode, and profile are in separate tables. You do not need to know where these fields are from when editing a patron record on the Staff Client, but you have to know it when creating a template on the Reports interface. Since various information about one patron is saved in separate tables, there must be a mechanism of matching the information about one patron correctly to make sure all information is about the same patron. This is done via recording the patron id (a unique number in the main patron record) in every related table. So via recording the id of a record in another table, two tables are connected. The connections among many tables are pre-made by the Reports interface. You just need to follow the link to find the data saved in the related table. Below is a simplified diagram showing the connections among some commonly used tables/views on the Reports interface, which can be a guide for you to find various fields in different tables. Some explanation of these tables is after the diagram. ILS User (aka Patron or User): contains patron records. A patron’s name, phone number, email address, and registration date can all be found in this table. Follow the links to the table Current Library Card to find a patron’s current barcode, Circulation to find the circulation history, Home Library, Mailing Address, Physical Address, and Main Profile Group, etc. to find more information about the patron. Item (aka Circulating Item): contains copy records. Item’s barcode, creation date, active date, last edited date, last copy status change date and price are in this table. For related information like call number, circulating library, circ modifier, status, shelving location, etc., you need to follow the links to the respective table to find them. For title information you need to follow the Call Number table to the Bibliographic Record table to find it. Follow the link to the Circulation table to find an item’s circulation history. Pre-catalogued item information such as dummy ISBN, title and author are also in this table. When a pre-catalogued item is checked out, an item record is created. 
If the barcode is already in the table and the item is not marked deleted, the item record will be updated with the new title, author, etc. Bibliographic Record. Contains title information. To find the basic bibliographic information such as title, author, ISBN, etc., follow the link to Simple Records Extract. Circulation. Contains circulation records, including pre-catalogued item circulations. When an item is checked out, a circulation record is created. When an item is renewed, the existing circulation record is closed and another record is created. Below are some important timestamps in this table. CheckIn Date/Time: the effective date when the item is treated as checked-in CheckIn Scan Date/Time: the time when the check in action is taken Due Date/Time: For all daily loans the due time is 23:59:59 of the day in Pacific Time. Hourly loans have specific time with time zone information. Fine Stops Date/Time: the date when the Maximum Fine limit has been reached, or the item is returned, marked lost or claimed returned. After this date, the fine generator will not create new overdue fines for this circulation. Record Creation Date/Time: the date and time when the circulation record is created. For online checkout it is the same as Checkout Date/Time. For offline checkout, this date is the offline transaction processing date. Transaction Finish Date/Time: the date when the bills linked to this checkout have been resolved. For a regular checkout without bills this field is filled with the checkin time when the item is returned. The circulating_library field in this table refers to the checkout location. The circulating_library in the Item table refers to the item’s owning library. Non-catalogued Circulation. When a non-catalogued checkout is recorded, a record is created in this table. Non-catalogued item category can be found in the linked Non-Cat Item Type table. In-house Use. Contains the catalogued item in-house use records. Non-catalogued In-house Use. Contains the non-catalogued item in-house use records. Follow the link to Item Type to find the non-catalogued item category. Copy Transit. When a copy is sent in transit, regardless of whether it is going back to its circulating library or going to fill a hold, a copy transit record is created in this table. Follow the link to Transit Copy to find the item information. Hold Transit. When a copy is sent in transit to fill a hold, a hold transit record is created in this table and the Copy Transit table. So this table contains a subset of records of the Copy Transit table. You may find hold information following the link to Hold Requiring Transit. Follow the link to Transit Copy to find the item information. Hold Request. When a hold is placed, a hold record is created in this table. You may find the hold receiver’s information in Hold User. Requesting User is the person who takes the placing hold action. It can be the hold receiver or a staff member. Generally if the Hold User is different from the Requesting User, this is a staff-placed hold. Hold Copy Map equals Eligible Copies. Copies that can be used to fill the hold are in this table. Target Object ID is shown as a link. But there is no linked table in the Source pane. The value in this field could be a bibliographic record id, a volume record id or a copy record id depending on the hold type. Timestamps in this table: Capture Date/Time: The time when the hold achieves hold shelf or hold-in-transit status. Fulfillment Date/Time: the time when the on-hold item is checked out. 
Hold Cancel Date/Time: the time when the hold is cancelled. Hold Expire Date/Time: This could be the date calculated based on your library’s default Hold Expire Interval or a selected date when placing the hold. Last Targeting Date/Time: The last time the hold targeting program checked for a target copy for the hold. It usually has the same time as the Hold Request Time. It is usually not useful for reporting, But it may serve as an indicator of whether the request time has been edited. Notify Time: when the email notice is sent out. Request Date/Time: Usually this is when the hold is placed. But it is editable on the staff client. So sometimes this may be the request time chosen by the staff. Shelf Expire Time: the date is calculated based on the Shelf Time and your library’s Default Hold Shelf Expire Interval. Shelf Time: when the hold achieves On Hold Shelf status. Thaw Date: the activation date for a suspended hold. Bills tables and views. Scroll down to the bottom of the Source list. Hover your mouse over All Available Sources. A new list will pop up to the right. Move your cursor to the list and scroll down to Billing Line Item. This table contains all the billing line items such as each day’s overdue fines and the grocery bills created manually. The records in this table are viewable on the Full Details screen on Bills in the staff client. Billable Transaction with Billing Location: this table contains the summary records of billings and payments. Most information in these records is displayed on Bills or Bills History screen. The records are updated when either the related billings or payments are updated. Transaction ID is the bill ID. It is also the circulation record ID for circulation bills. Transaction Start Time is the grocery bill creation time or circulation checkout time. Transaction Finish Time is when the bill is resolved. Payments tables and views. Payments: ALL contains all payment records. When a lump sum of payment is made on the staff client, one or more payment records are created depending on the number of bills it resolved or partially resolved. One bill may be resolved by multiple payments. Payments: Brick-and-mortar contains all payments made at the circulation desk. Payments: Desk: Cash/Check/Credit Card payment. Payments: Non-drawer Staff: Forgive/Work/Goods/Patron Credit payments.
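The id-based linking described at the start of this record (one patron's information spread across several tables and stitched back together through the patron id) is an ordinary relational join. Here is a toy Python/SQLite sketch with made-up table and column names, not Evergreen's actual schema:

```python
# Illustrative only: how storing a patron id in a related table lets two tables
# be joined back into one logical record, the way the Reports interface does
# when you "follow the link" between sources.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patron  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE barcode (id INTEGER PRIMARY KEY, patron_id INTEGER, barcode TEXT);
    INSERT INTO patron  VALUES (1, 'Pat Example');
    INSERT INTO barcode VALUES (10, 1, '31234000123456');
""")

row = conn.execute("""
    SELECT patron.name, barcode.barcode
    FROM patron JOIN barcode ON barcode.patron_id = patron.id
""").fetchone()
print(row)   # ('Pat Example', '31234000123456')
```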
http://docs.libraries.coop/sitka/_commonly_used_tables_and_views_in_evergreen_database.html
2018-11-12T20:16:30
CC-MAIN-2018-47
1542039741087.23
[]
docs.libraries.coop
Tagging and Assessment section describes the procedure for the Advanced eDiscovery Relevance Assessment module. Performing Assessment training and analysis In the Relevance > Track tab, click Assessment to start case assessment. For example purposes in this procedure, a sample assessment set of 500 files is created and the Tag tab is displayed, which contains the Tagging panel, displayed file content and other tagging options. Review each file in the sample, determine the file's relevance for each case issue, and tag the file using the Relevance (R), Not relevant (NR) and Skip buttons in the Tagging panel pane. Note Assessment requires 500 tagged files. If files are "skipped", you will receive more files to tag. After tagging all files in the sample, click Calculate. The Assessment current error margin and richness are calculated and displayed in the Relevance Track tab, with expanded details per issue, as shown below. More details about this dialog are described in the later section "Reviewing Assessments results". Tip By default, we recommend that you proceed to the default Next step when the Assessment progress indicator for the issue has completed, indicating that the assessment sample was reviewed and sufficient relevant files were tagged. > Otherwise, if you want to view the Track tab results and control the margin of error and the next step, click Modify adjacent to Next Step, select Continue assessment, and then click OK. Click Modify to the right of the Assessment check box to view and specify assessment parameters per issue. An Assessment level dialog for each issue is displayed, as shown in the following example: The following parameters for the issue are calculated and displayed in the Assessment level dialog: Target error margin for recall estimates: Based on this value, the estimated number of additional files necessary to review is calculated. The margin used for recall is greater than 75% and with a 95% confidence level. Additional assessment files required: Indicates how many more files are necessary if the current error margin's requirements have not been met. To adjust the current error margin and see the effect of different error margins (per issue): In the Select issue list, select an issue. In Target error margin for recall estimates, enter a new value. Click Update values to see the impact of the adjustments. Click Advanced in the Assessment level dialog to see the following additional parameters and details: Estimated richness: Estimated richness according to the current assessment results For assumed recall: By default, the target error margin applies to recall above 75%. Click Edit if you want to change this parameter and control the margin of error on a different range of recall values. Confidence level: By default, the recommended error margin for confidence is 95%. Click Edit if you want to change this parameter. Expected richness error margin: Given the updated values, this is the expected margin of error of the richness, after all additional assessment files are reviewed. Additional assessment files required: Given the updated values, the number of additional assessment files that need to be reviewed to reach the target. Total assessment files required: Given the updated values, total assessment files required for review. Expected number of relevant files in assessment: Given the updated values, the expected number of relevant files in the entire assessment after all additional assessment files are reviewed. 
Click Recalculate values, if parameters are changed. When you are done, if there is one issue, click OK to save the changes (or Next when there are multiple issues to review or modify and then Finish). When there are multiple issues, after all issues have been reviewed or adjusted, an Assessment level: summary dialog is displayed, as shown in the following example. Upon successful completion of assessment, proceed to the next stage in Relevance training. Reviewing assessment results After an Assessment sample is tagged, the assessment results are calculated and displayed in the Relevance Track tab. The following results are displayed in the expanded Track display: Assessment current error margin for recall estimates Estimated richness Additional assessment files required (for review) The Assessment current error margin is the error margin recommended by Advanced eDiscovery. The number displayed for the "Additional assessment files required" corresponds to that recommendation. The Assessment progress indicator shows the level of completion of the assessment, given the current error margin. When assessment is underway, the user will tag another assessment sample. When the assessment progress indicator shows assessment as complete, that means the assessment sample review was completed and sufficient relevant files were tagged. The expanded Track display shows the recommended next step, the assessment statistics, and access to detailed results. When richness is very low, the number of additional assessment files needed to reach a minimal number of relevant files to produce useful statistics is very high. Advanced eDiscovery will then recommend moving on to training. The assessment progress indicator will be shaded, and no statistics will be available. In the absence of statistically based stabilization, there will be results with a lower level of accuracy and confidence level. However, these results can be used to find relevant files when you do not need to know the percentage of relevant files found. Similarly, this status can be used to train issues with low richness, where Relevance scores can accelerate access to files relevant to a specific issue. Tip In the Relevance > Track tab, expanded issue display, the following viewing options are available: > The recommended next step, such as Next step: Tagging can be bypassed (per issue) by clicking the Modify button to its right, and then selecting an different step in the Next step. When the assessment progress indicator has not completed, assessment will be the next recommended option, to tag more assessment files and increase statistics accuracy. > You can change the error margin and assess its impact, by clicking Modify, and in the Assessment level dialog, changing the Target error margin for recall estimates, and clicking Update values. Also, in this dialog, you can view advanced options, by clicking Advanced. > You can view additional assessment level statistics and their impact by clicking View. In the displayed Detail results dialog, statistics are available per issue, when there are at least 500 tagged assessment files and at least 18 files are tagged as Relevant for the issue. See also Office 365 Advanced eDiscovery Understanding Assessment in Relevance Tagging and Relevance training Tracking Relevance analysis Deciding based on the results Testing Relevance analysis
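For intuition about why a smaller target error margin drives up the "Additional assessment files required", the usual sample-size formula for estimating a proportion gives the general shape of the relationship. This is a generic statistical illustration, not the exact calculation Advanced eDiscovery performs:

\[
n \;\approx\; \frac{z^{2}\,\hat{p}\,(1-\hat{p})}{E^{2}}
\]

Here \(\hat{p}\) is the estimated proportion (for example, richness), \(E\) is the target error margin, and \(z \approx 1.96\) at a 95% confidence level. With \(\hat{p} = 0.5\) and \(E = 0.05\) this gives roughly 385 files; halving \(E\) roughly quadruples the required sample, which is the qualitative behavior to expect when you lower the target error margin in the Assessment level dialog.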
https://docs.microsoft.com/en-us/office365/securitycompliance/tagging-and-assessment-in-advanced-ediscovery?redirectSourcePath=%252fen-gb%252farticle%252ftagging-and-assessment-in-office-365-advanced-ediscovery-b5c82de7-ed2f-4cc6-becd-db403faf4d18
2018-11-12T20:24:51
CC-MAIN-2018-47
1542039741087.23
[]
docs.microsoft.com
Sent when another object leaves a trigger collider attached to this object (2D physics only). Further information about the other collider is reported in the Collider2D parameter passed during the call. Note: Trigger events will be sent to disabled MonoBehaviours, to allow enabling Behaviours in response to collisions. See Also: Collider2D class, OnTriggerEnter2D, OnTriggerStay2D.

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public bool characterInQuicksand;

    void OnTriggerExit2D(Collider2D other)
    {
        characterInQuicksand = false;
    }
}
https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnTriggerExit2D.html
2019-07-15T22:17:06
CC-MAIN-2019-30
1563195524254.28
[]
docs.unity3d.com
Builder to create a 'BlockStateProperty' loot condition. This condition compares the block state obtained from the LootContext and attempts to match it to the given MCBlock. If this comparison succeeds, then the state is further compared according to the rules outlined in the StatePropertiesPredicate. This condition thus passes only if the block matches the given one and, optionally, if all the state properties match according to the predicate given to this loot condition. A 'BlockStateProperty' condition requires a block to be correctly built. Properties may or may not be specified.

It might be required for you to import the package if you encounter any issues (like casting an Array), so better be safe than sorry and add the import at the very top of the file.

ZenScript:
import crafttweaker.api.loot.conditions.vanilla.BlockStateProperty;

BlockStateProperty implements the following interfaces. That means all methods defined in these interfaces are also available in BlockStateProperty.

Sets the block that should be matched by the loot condition. This parameter is required. Returns: This builder for chaining. Return Type: BlockStateProperty

ZenScript:
BlockStateProperty.withBlock(block as MCBlock) as BlockStateProperty

Creates and sets the StatePropertiesPredicate that will be matched against the state's properties. Any changes that have already been made to the predicate will be overwritten, effectively replacing the previous predicate, if any. This parameter is optional. Returns: This builder for chaining. Return Type: BlockStateProperty

ZenScript:
BlockStateProperty.withStatePropertiesPredicate(builder as Consumer<StatePropertiesPredicate>) as BlockStateProperty
https://docs.blamejared.com/1.16/en/vanilla/api/loot/conditions/vanilla/BlockStateProperty/
2021-07-24T02:14:20
CC-MAIN-2021-31
1627046150067.87
[]
docs.blamejared.com
The What of Zap¶ “A Smooth Permittivity Function for Poisson-Boltzmann Solvation Methods,” J. Andrew Grant, Barry T. Pickup and Anthony Nicholls, J. Comp. Chem, Vol 22, No.6, pgs 608-640, April 2001. ZAP is, at its heart, a Poisson-Boltzmann (PB) solver. The Poisson equation describes how electrostatic fields change in a medium of varying dielectric, such as an organic molecule in water. The Boltzmann bit adds in the effect of mobile charge, e.g. salt. PB is an effective way to simulate the effects of water in biological systems. It relies on a charge description of a molecule, the designation of low (molecular) and high (solvent) dielectric regions and a description of an ion-accessible volume and produces a grid of electrostatic potentials. From this, transfer energies between different solvents, binding energies, pKa shifts, pI’s, solvent forces, electrostatic descriptors, solvent dipole moments, surface potentials and dielectric focussing can all be calculated. As electrostatics is one of the two principal components of molecular interaction (the other, of course, being shape), ZAP is OpenEye’s attempt to get it right.
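For readers who want the equation itself, the standard form of the Poisson-Boltzmann equation that grid-based solvers of this kind discretize is given below. This is the generic textbook statement in reduced units, not a formula quoted from the cited paper or from the ZAP documentation:

\[
\nabla \cdot \left[\varepsilon(\mathbf{r})\,\nabla\phi(\mathbf{r})\right]
\;-\; \bar{\kappa}^{2}(\mathbf{r})\,\sinh\phi(\mathbf{r})
\;=\; -4\pi\rho_{f}(\mathbf{r})
\]

Here \(\varepsilon(\mathbf{r})\) is the position-dependent dielectric (low inside the molecule, high in solvent), \(\rho_{f}(\mathbf{r})\) is the fixed molecular charge distribution, \(\phi(\mathbf{r})\) is the electrostatic potential, and \(\bar{\kappa}^{2}(\mathbf{r})\) carries the mobile-ion (salt) contribution and is nonzero only in the ion-accessible volume; linearizing the \(\sinh\) term gives the linearized Poisson-Boltzmann equation that is often solved in practice.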
https://docs.eyesopen.com/toolkits/csharp/zaptk/thewhatofzap.html
2021-07-24T02:19:12
CC-MAIN-2021-31
1627046150067.87
[]
docs.eyesopen.com
Make sure you write an excerpt for this page in the excerpt macro, but keep it short and simple, e.g. "The pages in this section cover topics related to...". The "Children Display" macro takes care of laying out a list of all of the pages that are direct children of this page, including their excerpts, for easy navigation.
https://docs.homeseer.com/plugins/viewsource/viewpagesrc.action?pageId=17467059
2021-07-24T00:50:47
CC-MAIN-2021-31
1627046150067.87
[]
docs.homeseer.com
This method has no return value. Requests for this method are similar to the following example. In this example, the attributes are cleared by specifying {} for them:

{
  "method": "ModifyAccount",
  "params": {
    "accountID": 25,
    "status": "locked",
    "attributes": {}
  },
  "id": 1
}

This method returns a response similar to the following example:

{
  "id": 1,
  "result": {}
}
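A request like the one above can be issued from Python as an ordinary JSON-RPC POST. The endpoint path, API version, cluster address and credentials below are illustrative assumptions, not taken from this page; check your cluster's API documentation for the correct values:

```python
# Hedged sketch: sending the ModifyAccount request shown above over HTTPS.
import requests

payload = {
    "method": "ModifyAccount",
    "params": {
        "accountID": 25,
        "status": "locked",
        "attributes": {},   # empty object clears the attributes, as in the example
    },
    "id": 1,
}

resp = requests.post(
    "https://cluster.example.com/json-rpc/11.3",   # hypothetical endpoint/version
    json=payload,
    auth=("admin", "password"),                    # hypothetical credentials
    verify=False,                                  # only if the cluster uses a self-signed cert
)
resp.raise_for_status()
print(resp.json())   # expected shape: {"id": 1, "result": {}}
```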
https://docs.netapp.com/sfe-113/topic/com.netapp.doc.sfe-api/GUID-4249C20B-E294-49D4-B3FF-CC35E651FA33.html
2021-07-24T01:40:25
CC-MAIN-2021-31
1627046150067.87
[]
docs.netapp.com
Summary Themed Login and Sign-up Page One of our most requested features since we released RefinedTheme for Jira Service Desk is to have the theme apply to the login page and we are pleased to inform you it's here! - RFJSD-15Getting issue details... STATUS When a user comes to the login or sign up page for your Jira Service Desk, the user will see the theme that is applied to your global site. This means that if you are not running RefinedTheme as the global theme, theses pages will look like default JSD, but when you have RefinedTheme applied globally, it will show a beautifully themed page. New and Improved Search Experience We received a number of requests asking for improvement to the search UI and the way search results display. We took this feedback on board and now introduce a completely new way of working with searches. UI improvements We've made the search results much clearer to read, and much easier to navigate. The first thing you will notice is the search results from the search module displayed in a pop-up box rather than a list. This makes it a lot easier for users to get an overview. The search results will display in a single column layout or a two column layout depending on the type of hits matching your search term. Extended search results The search will show results sourced from Recommended Links - Promoted Search Results, Knowledge Base articles, Request Types and Navigation (items in your site structure). Search context This applies to site search, category search, service desk search. When you type a search entry into the Search Highlight module, the search results will be contextual to where you are. This means that if you are on a site you will search the contents of that site, if you are on a category you will search the contents of that category. You can change the context to search globally by clicking the x indicating the context. Previews and Drafts for Layouts If you're a user of our other server product Refined for Confluence you've probably been enjoying previews and drafts for your site and category homes. Perhaps you've even been waiting for this feature to hit RefinedTheme for Jira Service Desk. Well, the wait is now over! With this version 2.1 we introduce Previews and Drafts for Layouts. Any changes that are made in the Edit mode of a site home, category home, or service desk portal will be saved as a draft. Using the preview button you can see exactly how your changes will look in the context of the theme. You will also be able to see who made changes last to the draft. Learn more here: Working with Drafts & Previews Import and Export of Layouts While we're on the subject of layouts for your homes and portals, we have another new feature for you; Import and export of layouts. With this feature you can export a layout that you are particularly happy with, and import it to a different JSD portal or home, or even a portal on a different instance. This is another one of our top requests. Using this feature you will be able to export/import for instance between a test or staging instance to your production instance instead of having to manually set everything up again. Please note that if you are using this functionality between two instances with different context paths, you will have to reselect images to layout modules after importing the layout. Learn more about this feature here: Export and Import of Layouts Announcement Banners Announcement banners have been one of the most voted for feature requests for RefinedTheme for Jira Service Desk. 
With so much demand for this feature did our best to ensure we deliver a super useful feature. You can now try it for yourself. - RFJSD-14Getting issue details... STATUS Add announcements by clicking Announcements in the Admin menu (settings cog). The banner will display at the top of the customer facing pages you decide. You can choose exactly which page of your support site you want to publish the announcement on by selecting it from the drop down menu in announcements settings dialogue. An announcement that is added to a higher level (site or category) will be visible on any page that is inside that site or category. The banner won't display on the service desk portals that don't have a theme applied. In the same Announcements settings dialogue you can decide if you want the announcement "Published" and/or "Closable". You can also copy announcements and delete them from this dialogue. You can choose from three different types of announcements: - Info (blue) - Warning (yellow/orange) - Alert (red) If you want a custom color for your banner, add custom CSS in the theme configuration. Learn more here: Announcement Banners Layout Options for Requests Including Show Assignee In this version we give you some more flexibility in how your data is shown to customers. For the request layout, we've added these options: - Show assignee on request. - Description on top of activity or the activity on top of the description. Please note that these settings apply globally for all your Service Desk projects. Learn more about request layout settings here: Request Layout. Status Colors on Requests Another helpful feature we're adding is displaying the color of the request status category. Status colors will follow the colors scheme in Jira, and it is possible to override the colors by targeting the CSS classes. The color of the status is visible in requests, My Requests Module, and request view. Image Bank Working with images in themes and modules have up until now meant uploading images to the element you want to use it on. We are super pleased to introduce an image bank. This is a place where you can upload images intended to be used by all portals and themes. We're hoping this will mean (at least) two things: - If you have a design department they can get you the images they want you to use, and these images are available in portals and themes. - You will not have to upload an image to all the places you want to use it. Images to the Image Bank are uploaded in Theme Configuration > Images. Image Bank images in module Image Bank images in theme Improvements to Knowledge Base Up until version 2.1, you've been able to see articles in Confluence in a popup showing the article and an optional link to the page in Confluence. In this version we introduce a pagetree of your KB articles displayed when shown as a popup. This is especially relevant to those on a setup with the same users in Confluence and Jira Service Desk. This will allow users to browse the space using the page tree on the left hand side, staying in context of Jira Service Desk. Tutorial: Browsing Confluence from Jira Service Desk Article Popup Updates: Custom settings for Search Highlight? You'll get popup after this release Fo any of you that are running the custom setting for the Search Highlight you'll get a changed setting from "Open in new window" to "popup", and this is simply because we believe that with this new setting, browsing Confluence from JSD will be a nice experience. If you have any questions, please contact us. 
Global Search

We added a global search that you can access in the top right corner of any page on your support site (site, category, customer portal, request view). You can apply custom settings to connect the global search to a knowledge base. Learn how to set this up here: Knowledge Base Settings.

Admin Functions Available Through Site, Category or Portal Menu

In the spirit of reducing your clicks, we've improved the admin navigation so that you can access admin functions via the site home, category home and/or service desk portal.

This is a lot to take in with so many new features here. So if you have any questions for us, you are welcome to reach out. If you have any feedback or find any bugs, please visit support.refinedwiki.com and file a ticket.
https://docs.refined.com/display/RTJSDS/Version+2.1
2021-07-24T00:56:32
CC-MAIN-2021-31
1627046150067.87
[]
docs.refined.com
Overview

ThreatSTOP’s Centralized Manager (TSCM) is a Linux-based virtual machine that powers the integration between ThreatSTOP’s Threat Intelligence Platform and the following device families:

- A10 Thunder
- Cisco ASA
- Cisco ISR
- Cisco Firepower
- Fortinet Fortigate
- Palo Alto Networks PA Series
- Infoblox NIOS

The TSCM provides a command line tool to link a device entry in the ThreatSTOP portal and the actual device. Its purpose is to retrieve policy updates and update the device’s ACLs with the latest data, and to forward logs to the ThreatSTOP portal for reporting on network connections that were blocked by that policy.

Network configuration

The TSCM is available as an Ubuntu-based virtual machine (A10, ASA, ISR, Firepower, Fortigate, PAN-OS). Red Hat images (RHEL 7 and CentOS 7) are also available for the A10 ADC and TPS integration. The TSCM image is configured to use DHCP during its initial boot. It can be reconfigured to use a static IPv4 configuration using the tsadmin network command. The command will first ask to choose between DHCP and static settings. In either configuration, the TSCM will keep its current IP address until it is rebooted, which allows validating the new connectivity settings before making them permanent.

Using DHCP, the command will display the new IP address after it’s successfully retrieved.

$ tsadmin network
Use DHCP[y/n]: y
Applying DHCP settings...
*** Verify IP settings ***
Interface Address Method: DHCP
Apply settings[y/n]: y
[Backing up current configuration]
using DHCP template
applying permanent config to /etc/network/interfaces
[ ok ] Restarting networking (via systemctl): networking.service.
The IP address: 172.16.1.138 will disappear on next reboot
Your current IP: 172.16.1.138
Success: Network setup complete.

Using a static network configuration, the command will prompt for the IP address, netmask, gateway and DNS server(s).

Use DHCP[y/n]: n
IP Address: 172.16.2.100
Adding: IP Address '172.16.2.100'
Subnet Mask (Valid formats: 255.255.240.0 or /24): /24
Appears to be a valid network: IPv4Network('172.16.2.0/24')
Default Route Address: 172.16.2.1
Adding: Default Route Address '172.16.2.1'
[Adding DNS Server]
Adding: DNS Server Address '172.16.2.2'
Add Another DNS Server[y/n]: n
*** Verify IP settings ***
Address: IPv4Address('172.16.2.100')
Netmask: IPv4Address('255.255.255.0')
Netmask Bits: /24
v4_or_v6: 4
Default Route: IPv4Address('172.16.2.1')
DNS Servers: [IPv4Address('172.16.2.2')]
Current IP: IPv4Address('172.16.1.138')
Apply settings[y/n]: y
Please test network connectivity
Try running: ping 172.16.2.100
Can you communicate with the new address?[y/n]: y
[Backing up current configuration]
using STATIC template
applying permanent config to /etc/network/interfaces
[ ok ] Restarting networking (via systemctl): networking.service.
The IP address: 172.21.70.138 will disappear on next reboot
Success: Network setup complete.

- If the TSCM isn’t reachable on the new IP address as expected, the command can be run again.
- If you are unsure about the current IP address of the TSCM, you can check its IP on the video console provided by your Hypervisor. You can also log in to the console to change the network configuration with the tsadmin network command.

TSCM Credentials

- The default password for the threatstop account is threatstop.
- The TSCM operations (tsadmin command) can only be run using this account.
- The password can be changed using the tsadmin account command.
$ tsadmin passwod [INFO ] : Changing account password Ctrl + C to cancel Changing password for threatstop. (current) UNIX password: ********** Enter new UNIX password: ********** Retype new UNIX password: ********** passwd: password updated successfully System maintenance In addition to the ThreatSTOP software pre-packaged on the virtual machine, please note that the OS has been configured to run syslog-ng instead of the standard syslog daemon (rsyslogd). Use of the ThreatSTOP TSCM VM should require little to no management with the exception of package updates using apt (Ubuntu) and yum (Red Hat). You can change these settings without impacting the TSCM application: - change the network configuration to use a static IP address, routes and/or DNS servers. - change credentials for the threatstop account Please note that we recommend not installing additional applications on the virtual machine, but common applications (e.g. backup solutions) should not impact the TSCM functionality. Upgrading base-files on Ubuntu The TSCM modifies the /etc/issue file on Ubuntu. When the Ubuntu package is updated, its installation will warn about a conflict. It is safe to choose the update or keep option, and the file will be replaced during the next boot. CLI reference All TSCM operations are performed by running the tsadmin command as the threatstop user. The following tables show the available operations and their options. tsadmin add Devices using TSCM with Web Automation integration (Web based) - Syntax: tsadmin add –type auto –device_id=<tdid> –auto_key=<product_key> - Will proceed with the automatic configuration of a new device. - tdid (required): Device ID retrieved from the ThreatSTOP Portal - product_key (required): Product Key retrieved from the ThreatSTOP Portal Devices using TSCM integration (CLI based) - Syntax: tsadmin add –type <device type> <device nickname> [–advanced] - Will proceed with the manual configuration of a new device. You will be prompted for the settings. - Arguments - device type (required): one of a10, asa, isr, fortinet, pan - device nickname (required): will be used to identify the device on the TSCM - –advanced (optional): prompt for advanced network settings and device configuration - Example threatstop@tsclient:~$ tsadmin add --type asa my_asa Configuring Cisco ASA device. Enter the device ID (tdid): tsadmin remove - Syntax: tsadmin remove <device nickname> - Will remove the device entry from the TSCM, thus disabling policy updates and log forwarding - You can re-run tsadmin add to re-add the device - Arguments - device nickname (required). Can be retrieved using tsadmin list. tsadmin configure - Syntax: tsadmin configure <device nickname> - Will reconfigure an existing device entry. - Existing settings will be presents as default. - Arguments - device nickname (required). Can be retrieved using tsadmin list. - Example threatstop@tsclient:~$ tsadmin configure my_asa Configuring Cisco ASA device. Enter the device ID (tdid): [default tdid_12345678] tsadmin update - Syntax: tsadmin update <device nickname> - Will retrieve the policy from ThreatSTOP’s Policy servers and update ACLs on the device. - This command is performed automatically every hour using cron - Arguments - device nickname (required). Can be retrieved using tsadmin list. threatstop@tsclient:~$ tsadmin update my_asa [INFO ] : CONFILE = /opt/threatstop/etc/devices/my_asa.conf [INFO ] : Previous configuration found ... Loading config data... 
[INFO ] : Locking current execution instance [INFO ] : Starting /opt/threatstop/bin/ts-asa v3.35 on Fri May 25 23:40:39 2018 [INFO ] : Verifying mandatory parameters state [INFO ] : Building allow/deny lists .... tsadmin list - Syntax: tsadmin list - Show the list and basic settings of devices currently configured - Arguments - none - Example threatstop@tsclient:~$ tsadmin list | Device name | Type | Device ID | Management IP | Log upload ID | Log uploads | | my_asa | asa | tdid_12345678 | 172.21.50.3 | tdid_12345678 | enabled | tsadmin show - Syntax: tsadmin show <device nickname> - Show the current settings of an existing device entry. - Arguments - device nickname (required). Can be retrieved using tsadmin list. - Example threatstop@tsclient:~$ tsadmin show my_asa Setting name Value ---------------------------------------- --------------------------- Device Name my_asa Device Type Cisco ASA Automatic Configuration disabled Automatic Updates enabled Device Auto-configuration Key Block List basic.threatstop.local Allow List dns.threatstop.local Log association (External IP address) Log association (Device ID) tdid_12345678 DNS Server(s) for Updates ts-dns.threatstop.com DNS port for Updates 53 Device Management IP Address 172.21.50.3 All Syslog IP Addresses 172.21.50.3 Log Size for Uploads 100 Log Uploads enabled List Updates enabled Username admin Syslog Forward disabled additional IP(s) object_group_block threatstop-block object_group_allow threatstop-allow custom_username_prompt not customized custom_password_prompt not customized maxpolicysize 30000 tsadmin logs - Syntax: tsadmin logs - Will perform a log upload - Requires log files to be present; the command will exit if no log files are available. - If multiple devices are configured, logs will be rotated and uploaded for each one. - Arguments - none - Example/my_asa/syslog.1] stats [INFO ] : Processing [/var/log/threatstop/devices/devicename/syslog.1] log file [INFO ] : Start sending data [INFO ] : Preparing connection data [INFO ] : Connecting to [INFO ] : Upload was successful [200 OK] [INFO ] : Completed processing for device [my_asa] [INFO ] : Finish ThreatSTOP logupload operation at 24/05/2018 19:34:10 after 00:00:05 [INFO ] : Log upload client exited tsadmin version - Syntax: tsadmin version - Display the version of the TSCM package - Arguments - none - Example threatstop@tsclient:~$ tsadmin version 1.36
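A typical first-time workflow, pieced together from the commands above, might look like the following shell session. The device type, nickname and Device ID are placeholder values; substitute what the ThreatSTOP portal shows for your own device.

# Register a Cisco ASA manually (CLI-based integration); you will be prompted for its settings
tsadmin add --type asa my_asa

# Confirm the device was registered and review its settings
tsadmin list
tsadmin show my_asa

# Pull the current policy and push the ACLs to the device now (cron repeats this every hour)
tsadmin update my_asa

# Force an immediate log upload for reporting
tsadmin logs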
https://docs.threatstop.com/tscm_cli.html
2021-07-24T01:55:32
CC-MAIN-2021-31
1627046150067.87
[]
docs.threatstop.com
Contents:

Tip: Use the Transform Builder for performing modifications to the transformation you selected from the Search panel or a suggestion card. See Transform Builder.

Sample: Loading very large datasets in Trifacta SaaS ...

Tip: Some operations, such as unions and joins, can invalidate source row number information. To capture this data into your dataset, it's best to add this transformation early in your recipe.

Tip: Where possible, you should set the data type for each column to the appropriate type. Trifacta SaaS does maintain statistical information and enable some transformation steps based upon data type. See Column Statistics Reference.

Figure: Column data histogram

Figure: Modifying steps in the Transform Builder

NOTE: The reference data that you are using for lookups must be loaded as a dataset into Trifacta SaaS first.
https://docs.trifacta.com/display/AWS/Transform+Basics
2021-07-24T01:19:52
CC-MAIN-2021-31
1627046150067.87
[]
docs.trifacta.com
User’s Guide¶¶ General Process of Manual Annotation¶.. Annotation¶¶ Below are detail about both biological principles and technical aspects to consider when editing a gene prediction.¶¶. Add UTRs¶ . Exon Structure Integrity¶..<< Figure 2. Apollo view, zoomed to base level.¶¶.¶.¶¶ Get Sequences¶ Select one or more exons, or an entire gene model of interest, and retrieve the right-click menu to select the ‘Get sequence’ function. Chose from the options to obtain protein, cDNA, CDS or genomic sequences. Merge Exons, Merge Transcripts¶ Select each of the joining exons while holding down the ‘Shift’ key, open the right-click menu and select the ‘Merge’ option. Add an Exon¶¶¶ Select the exon using a single click (double click selects the entire model), and select the ‘Delete’ option from the right-click menu. Check whether deleting one or more exons disrupts the reading frame, inserts premature ‘Stop’ signals, etc. Flip the Strand of Annotation¶. Complex Cases¶ Merge Two Gene Predictions on the Same Scaffold¶<< Figure 4. Edge-matching in Apollo.¶¶ It is not yet:” Split a Gene Prediction¶: -¶¶. The Information Editor¶ (’Needs review’), or has already been ‘Approved’ using the ‘Status’ buttons. Users will also be able to input information about their annotations in fields that capture - Crossed references to other databases in ‘DBXRefs’. - Additional ‘Attributes’ in a ‘tag/value’ format that pertain to the annotation. - References to any published data in the PubMed database using ‘Pubmed IDs’. -. Add Comments¶. Add Database Crossed-references, PubMed IDs, and GO IDs¶ ‘PubMed ID’ using the provided field, and available functional information should be added using GO IDs as appropriate. The process to add information to these tables is the same as described for the ‘Comments’ tables. Add Attributes¶. (No need for) Saving your Annotations¶ Apollo immediately saves your work, automatically recording it on the database. Because of this, your work will not be lost in the event of network disruptions, and no further actions are required in order to save your work. Exporting Data¶¶ The Apollo Demo uses the genome of the honey bee (Apis mellifera). Below are details about the experimental data provided as supporting evidence. Evidence in support of protein coding gene models¶ Consensus Gene Sets comparison:¶ - Protein Coding Gene Predictions Supported by Biological Evidence:¶ - NCBI Gnomon - Fgenesh++ with RNASeq training data - Fgenesh++ without RNASeq training data - NCBI RefSeq Protein Coding Genes - NCBI RefSeq Low Quality Protein Coding Genes Ab initio protein coding gene predictions:¶ - Augustus Set 12 - Augustus Set 9 - Fgenesh - GeneID - N-SCAN - SGP2 Transcript Sequence Alignment:¶ - Additional Information About Apollo¶.
https://genomearchitect.readthedocs.io/en/latest/UsersGuide.html
2021-07-24T00:52:39
CC-MAIN-2021-31
1627046150067.87
[array(['_images/Apollo_User_Guide_Figure2.jpg', '_images/Apollo_User_Guide_Figure2.jpg'], dtype=object) array(['_images/Apollo_Users_Guide_Figure3.png', '_images/Apollo_Users_Guide_Figure3.png'], dtype=object) array(['_images/Web_Apollo_User_Guide_edge-matching.png', '_images/Web_Apollo_User_Guide_edge-matching.png'], dtype=object)]
genomearchitect.readthedocs.io
For details, please see the following tables of contents (which are organized by area of interest).
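As a rough orientation to the client API covered by those tables of contents, a minimal Paramiko session usually follows the pattern sketched below. The hostname, credentials and command are illustrative placeholders, not values from this documentation.

import paramiko

# Open an SSH session and run a single command (host and credentials are examples)
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys; suitable for a demo only
client.connect("ssh.example.com", username="user", password="secret")

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()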
http://docs.paramiko.org/en/1.18/index.html
2021-07-24T01:43:41
CC-MAIN-2021-31
1627046150067.87
[]
docs.paramiko.org
The Fedora Project Leader The Fedora Project Leader (or Council. Matthew Miller is the current Fedora Project Leader. He led the Fedora.next initiative for a strategic approach to Fedora’s second decade, as well as the transition from the Fedora Board to the current Council governance model. Matthew created the BU Linux distribution for Boston University in 1999 and maintained that through 2008, when he moved to the Harvard School of Engineering and Applied Sciences to work on high performance computing and this up-and-coming “cloud” thing. He has been at Red Hat since 2012, first working on Fedora Cloud and becoming FPL in June, 2014. Contact Matthew as mattdm on Libera.Chat IRC, as @mattdm on Twitter, or on the Council-Discuss mailing list. Previous Fedora Project Leaders - Robyn Bergeron February 2012 – June 2014 (Fedora 17 – 21) - Jared Smith July 2010 – February 2012 (Fedora 14 – 16) - Paul W. Frields February 2008 – July 2010 (Fedora 9 – 14) - Max Spevack February 2006 – February 2008 (Fedora Core 5 – Fedora 9) - Greg DeKoenigsberg August 2005 – February 2006 (Fedora Core 4 & 5) - Cristian Gafton January 2004 – August 2005 (Fedora Core 2, 3, & 4) - Michael Johnson July 2003 – January 2004 (Fedora Core 1)
https://docs.fedoraproject.org/fa/council/fpl/
2021-07-24T02:46:56
CC-MAIN-2021-31
1627046150067.87
[]
docs.fedoraproject.org
_SESSION_MILESTONE_HOUR

Description

This table describes caller activity within an SDR session. The same columns and column descriptions apply to other AGT_SDR_SESS_BLOCK_* tables.

SDR_CALL_DISPOSITION_KEY
The key that is used to join the SDR_CALL_DISPOSITION dimension to the fact tables.

SDR_CALL_TYPE_KEY
The key that is used to join the SDR_CALL_TYPE dimension to the fact tables.

SDR_ENTRY_POINT_KEY
The key that is used to join the SDR_ENTRY_POINT dimension to the fact tables.

SDR_EXIT_POINT_KEY
The key that is used to join the SDR_EXIT_POINT dimension to the fact tables.

SDR_MILESTONE_KEY
The key that is used to join the SDR_MILESTONE dimension to the fact tables.

CALLS
The total number of interactions that entered the Designer application during the reporting interval.

AGR_SET_KEY
The surrogate key that is used to join this aggregate table to the AGR_SET table.

Subject Areas

No subject area information available.
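Because each *_KEY column joins to the dimension of the same name, reports against this fact table are typically simple star-schema joins. The SQL below is only an illustrative sketch: the descriptive columns selected from the dimension tables (such as MILESTONE_NAME and CALL_TYPE) are assumed names and should be checked against the actual dimension definitions.

-- Hourly milestone counts per milestone and call type (illustrative column names)
SELECT m.MILESTONE_NAME,
       ct.CALL_TYPE,
       SUM(f.CALLS) AS TOTAL_CALLS
FROM   AGT_SDR_SESS_MILESTONE_HOUR f
JOIN   SDR_MILESTONE m  ON m.SDR_MILESTONE_KEY  = f.SDR_MILESTONE_KEY
JOIN   SDR_CALL_TYPE ct ON ct.SDR_CALL_TYPE_KEY = f.SDR_CALL_TYPE_KEY
GROUP BY m.MILESTONE_NAME, ct.CALL_TYPE;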
https://docs.genesys.com/Documentation/RAA/latest/PDMMS/Table-AGT_SDR_SESS_MILESTONE_HOUR
2021-07-24T01:59:11
CC-MAIN-2021-31
1627046150067.87
[]
docs.genesys.com
MultiMeshInstance¶ Inherits: GeometryInstance < VisualInstance < Spatial < Node < Object Node that instances a MultiMesh. Description¶ MultiMeshInstance is a specialized node to instance GeometryInstances based on a MultiMesh resource. This is useful to optimize the rendering of a high amount of instances of a given mesh (for example trees in a forest or grass strands).
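In practice the per-instance data lives on the MultiMesh resource assigned to this node, and is usually filled in from a script. The GDScript below is a minimal sketch against the Godot 3.2 API; the mesh resource path, instance count and placement logic are arbitrary example values.

# Scatter many copies of one mesh through a single MultiMeshInstance (illustrative values)
extends MultiMeshInstance

func _ready():
    multimesh = MultiMesh.new()
    multimesh.transform_format = MultiMesh.TRANSFORM_3D
    multimesh.mesh = preload("res://grass_blade.mesh")  # example mesh resource
    multimesh.instance_count = 1000
    for i in range(multimesh.instance_count):
        var origin = Vector3(randf() * 50.0, 0.0, randf() * 50.0)
        multimesh.set_instance_transform(i, Transform(Basis(), origin))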
https://docs.godotengine.org/en/3.2/classes/class_multimeshinstance.html
2021-07-24T01:09:14
CC-MAIN-2021-31
1627046150067.87
[]
docs.godotengine.org
[Pro] How to customize your footer credits in Customizr Pro?

The footer may not be the most noticeable part of your site, but it is one of the most important areas because it is the place where the copyright and credits roll out.

What is a copyright and why do I need it?

A copyright is your claim on the content of your site. According to the Copyright Acts, any content, as soon as it is created, gets copyrighted to the author. So it is not absolutely required to put up a copyright notice on your site, but it is a redundant yet prevalent practice to do so, usually in the footer. It is useful to show a copyright in the footer, and here is how you can do it in the Customizr WordPress theme.

Accessing the footer customizer

The Footer Credits can be accessed from the customizer under Footer > Footer Credits. You can enable the Footer Credits section by checking Enable the footer copyrights and credits.

Most of the settings here are automatically filled. The default values should be fine as they are. Should you want to make any changes, you can make them here.

- The first setting under the Copyright section is Copyright text. Here you have to give the copyright symbol ©, the year and optionally a text like All rights reserved.
- The next setting is the Site Name. You have to enter your site's name here.
- The last setting in this section is Site Link. Give your site's URL here.

Credits

With Customizr Pro, you can customize the default credits text "designed by Press Customizr".

- You can enable the credits text by checking Display designer credits (enabled by default).
- The next setting is Credit text. The default value is Designed by.
- Against Designer name, the default value is Press Customizr, and the Designer link points to the site of Press Customizr.

Unless you have customized the site a lot, it would not be necessary to change these settings. If, indeed, you have customized your site a lot and want to display your name, use Customized by as the Credit text and give your name as the Designer name.

With that, you are done with the Footer Credits. Save and publish. You should see something like this at the bottom of every page of your site.

These are very simple settings, but do not forget to add them to improve the professional look of your site.

Doc created by: Menaka S.
https://docs.presscustomizr.com/article/168-pro-how-to-customize-your-footer-credits-in-customizr-pro
2021-07-24T00:49:00
CC-MAIN-2021-31
1627046150067.87
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/561cb209c697916fa4a83eea/file-wmRQMPZrIM.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/561cb218c697916fa4a83eeb/file-UA813BFssh.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/561cb259c697916fa4a83eed/file-HRW7D6Tx69.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/561cb2689033600a7a36d6a3/file-GbCUWRp1eL.png', None], dtype=object) ]
docs.presscustomizr.com
How do I remove a device from MDM? By default, Apple allows the MDM profile to be removed from devices via Settings at any time. However, there are ways to enroll devices that restrict this capability, as discussed in this article. To fully remove a device from all forms of management, the necessary process will ultimately depend on how the device became enrolled in MDM. If the device was enrolled manually or using Apple Configurator: there are two ways that devices can be unenrolled. 1. Manually: - Open Settings on the device. - Go to General > Device Management. - Select the MDM profile. - Select 'Remove Management'. 2. Via SimpleMDM: - Click the device name. - Click 'Actions'. - Select 'Remove'. This will send a command to the device instructing it to delete the MDM profile. Note: if the device has Supervised Mode enabled and you would like to disable it, you will need to erase the device even after removing it from MDM. If the device was enrolled using Automated Enrollment with Apple Business/School Manager (formerly DEP enrollment): there are additional steps required to remove management. - Release the device from your Apple Business Manager account:. Note: this step must happen first - otherwise the device will automatically re-enroll after being erased. - Delete the device record from SimpleMDM. - Erase the device.
https://docs.simplemdm.com/article/127-how-do-i-remove-a-device-from-mdm
2021-07-24T00:24:49
CC-MAIN-2021-31
1627046150067.87
[]
docs.simplemdm.com
Introduction For more information, please read the block description. Block type: DATA This block provides access to Digital Surface Model (DSM) and Digital Terrain Model (DTM) products in the NEXTMap Elevation Data Suite with a global coverage at 5 6 250 sqkm. Example queries Example query using bbox without clipping: { "nextmapone-5m:1": { "bbox": [ 55.30105590820313, 25.221093107315813, 55.32852172851563, 25.24516175079434 ] } } Output format { "type": "FeatureCollection", "features": [ { "type": "Feature", "bbox": [ 55.30000000000001, 25.22500000000001, 55.35000000000002, 25.25 ], "id": "c36a041c-c674-4a1e-bbe5-b41c4293007a", "geometry": { "type": "Polygon", "coordinates": [ [ [ 55.325, 25.225 ], [ 55.3, 25.225 ], [ 55.3, 25.25 ], [ 55.325, 25.25 ], [ 55.35, 25.25 ], [ 55.35, 25.225 ], [ 55.325, 25.225 ] ] ] }, "properties": { "up42.data_path": "c36a041c-c674-4a1e-bbe5-b41c4293007a.tif" } } ] }
https://docs.up42.com/blocks/data/nextmapone-5m/
2021-07-24T01:24:12
CC-MAIN-2021-31
1627046150067.87
[]
docs.up42.com
Feed files¶ (versions). An Identity is a way to recognise an Implementation (e.g. a cryptographic digest). A Retrieval method is a way to get an Implementation (e.g. by downloading from a web site). A Command says how to run an Implementation as a program. A Dependency indicates that one component depends on another (e.g. Gimp requires the GTK library). A Binding says how to let the program locate the Implementations when run. A Constraint limits the choice of a dependency (e.g. Gimp requires a version of GTK >= 2.6). Terminology Originally the word 'interface' was used to mean both 'interface' and 'feed', so don't be confused if you see it used this way. Introduction¶ Feed files are introduced in the Packager's Documentation. They have the following syntax ( ? follows optional items, * means zero-or-more, order of elements is not important, and extension elements can appear anywhere as long as they use a different namespace): <?xml version='1.0'?> <interface xmlns='' min- * <feed src='../img/...' langs='...' ? * <replaced-by ? [group] * [implementation] * [entry-point] * </interface> min-injector-version - This attribute gives the oldest version of 0install that can read this file. Older versions will tell the user to upgrade if they are asked to read the file. Versions prior to 0.20 do not perform this check, however. If the attribute is not present, the file can be read by all versions. uri - This attribute is only needed for remote feeds (fetched via HTTP). The value must exactly match the expected URL, to prevent an attacker replacing one correctly-signed feed with another (e.g., returning a feed for the shredprogram when the user asked for the backupprogram). <name> - a short name to identify the interface (e.g. "Foo") <summary> - a short one-line description; the first word should not be upper-case unless it is a proper noun (e.g. "cures all ills"). Supports localization. <description> - a full description, which can be several paragraphs long (optional since 0.32, but recommended). Supports localization. - the URL of a web-page describing this interface in more detail <category> - a classification for the interface. If no type is given, then the category is one of the 'Main' categories defined by the freedesktop.org menu specification. Otherwise, it is a URI giving the namespace for the category. <needs-terminal> - if present, this element indicates that the program requires a terminal in order to run. Graphical launchers should therefore run this program in a suitable terminal emulator. <icon> - an icon to use for the program; this is used by programs such as AddApp and desktop integration. You should provide an icon of the type image/png( .png) for display in browsers and launchers on Linux. For Windows apps you should additionally provide an icon of the type image/vnd.microsoft.icon( .ico). <feed> - the linked feed contains more implementations of this feed's interface. The langsand archattributes, if present, indicate that all implementations will fall within these limits (e.g. arch='*-src'means that there is no point fetching this feed unless you are looking for source code). See the <implementation>element for a description of the values of these attributes. <feed-for> - the implementations in this feed are implementations of the given interface. This is used when adding an optional extra feed to an interface with 0install add-feed(e.g. a local feed for a development version). 
<replaced-by> - this feed's interface (the one in the root element's uriattribute) has been replaced by the given interface. Any references to the old URI should be updated to use the new one. Groups¶ A group has this syntax: <group version='...' ? released='...' ? main='...' ? self-test='...' ? doc-dir='...' ? license='...' ? released='...' ? stability='...' ? langs='...' ? arch='...' ? > [requires] * [group] * [command] * [binding] * [implementation] * [package-implementation] * </group> All attributes of the group are inherited by any child groups and implementations as defaults, but can be overridden there. All dependencies ( requires), bindings and commands are inherited (sub-groups may add more dependencies and bindings to the list, but cannot remove anything). Implementations¶ An implementation has this syntax (an unspecified argument is inherited from the closest ancestor <group> which defines it): <implementation id='...' local-path='...' ? [all <group> attributes] > <manifest-digest [digest] * /> * [command] * [retrieval-method] * [binding] * [requires] * </implementation> id - A unique identifier for this implementation. For example, when the user marks a particular version as buggy this identifier is used to keep track of it, and saving and restoring selections uses it. However, see the important historical note below. local-path - If the feed file is a local file (the interface uristarts with /) then the local-pathattribute may contain the pathname of a local directory (either an absolute path or a path relative to the directory containing the feed file). See the historical note below. version - The version number. See the version numbers section below for more details. main(deprecated) - The relative path of an executable inside the implementation that should be executed by default when the interface is run. If an implementation has no mainsetting, then it cannot be executed without specifying one manually (with 0install run --main=MAIN). This typically means that the interface is for a library. Note: mainis being replaced by the <command>element. self-test(deprecated) - The relative path of an executable inside the implementation that can be executed to test the program. The program must be non-interactive (e.g. it can't open any windows or prompt for input). It should return with an exit status of zero if the tests pass. Any other status indicates failure. Note: self-testis being replaced by the <command>element. doc-dir - The relative path of a directory inside the implementation that contains the package's documentation. This is the directory that would end up inside /usr/share/docon a traditional Linux system. released - The date this implementation was made available, in the format YYYY-MM-DD. For development versions checked out from version control this attribute should not be present. stability - The default stability rating for this implementation. If not present, testingis used. See the stability section below for more details. langs - The natural language(s) which this package supports, as a space-separated list of languages codes (in the same format as used by the $LANGenvironment variable). For example, the value en_GB frwould be used for a package supporting British English and French. Supported since 0.48. Note that versions before 0.54 require the region separator to be _(underscore), while later versions also allow the use of -for consistency with the xml:langformat. 
arch - For platform-specific binaries, the platform for which this implementation was compiled, in the form os-cpu. 0install knows that certain platforms are backwards-compatible with others, so binaries with arch="Linux-i486"will still be available on Linux-i686machines, for example. Either the osor cpupart may be *, which will make it available on any OS or CPU. If missing, the default is *-*. See also: Valid architecture names. license - License terms. This is typically a Trove category. See the PyPI list for some examples (the leading License ::is not included). The manifest-digest element is used to give digests of the .manifest file using various hashing algorithms (but see the historical note below). Having multiple algorithms allows a smooth upgrade to newer digest algorithms without breaking old clients. Each non-namespaced attribute gives a digest, with the attribute name being the algorithm. Example <manifest-digest For non-local implementations (those without a local-path attribute), the <implementation> element contains a set of retrieval methods, each of which gives a different way of getting the implementation (i.e. of getting a directory structure whose digest matches the ones given). Currently, 0install. Unrecognised elements inside an implementation are ignored. Historical note about id¶ 0install >= 0.45 generally treats the ID as a simple identifier, and gets the local path (if any) from the local-path attribute and the digests from the <manifest-digest>. 0install < 0.45 ignores the local-path attribute and the <manifest-digest> element. If the ID starts with . or / then the ID is also the local path; otherwise, it is the single manifest digest. For backwards compatibility, 0install >= 0.45 will treat an ID starting with . or / as a local path if no local-path attribute is present, and it will treat it as an additional digest if it contains an = character. Therefore, if you want to generate feeds compatible with past and future versions: - If you have a digest, set the ID to sha1new=...and put the sha256 digest in the <manifest-digest>. - If you have a local implementation then set both idand local-pathto the pathname. Commands¶ The main attribute above provides a simple way to say how to run this implementation. The <command> element (supported since 0.51, released Dec 2010) provides a more flexible alternative. <command name='...' path='...' ? > [binding] * [requires] * [runner] ? <arg> ... </arg> * <for-each item-from='...' separator='...'? > ... </for-each> * </command> name - By default, 0install executes the runcommand, but the --commandoption can be used to specify a different one. 0test runs the testcommand (replacing the old self-testattribute) and 0compile runs the compilecommand (replacing the compile:commandattribute). path - The relative path of the executable within the implementation (optional if <runner>is used). Additional arguments can be passed using the <arg> element. Within an argument, ${name} is expanded to the value of the corresponding environment variable. These arguments are passed to the program before any arguments specified by the user. If an environment variable should be expanded to multiple arguments, use <for-each>. The variable in the item-from attribute is split using the given separator (which defaults to the OS path separator, : on POSIX and ; on Windows) and the arguments inside the element are added for each item. The current item is available as ${item}. 
If the variable given in item-from is not set or is empty, no arguments are added. See below for an example. Versions of 0install before 1.15 ignore <for-each> elements and their contents. Command-specific dependencies can be specified for a command by nesting <requires> elements. For example, an interpreter might only depend on libreadline when used interactively, but not when used as a library, or the test command might depend on a test framework. Command-specific bindings (0install >= 1.3) create a binding from the implementation to itself. For example, the test command may want to make the run command available in $PATH using <executable-in-path>. The <runner> element introduces a special kind of dependency: the program that is used to run this one. For example, a Python program might specify Python as its runner. <runner> is a subclass of <requires> and accepts the same attributes and child elements. In addition, you can specify arguments to pass to the runner by nesting them inside the <runner> element. These arguments are passed before the path of the executable given by the path attribute. Example <command name='run' path="causeway.e-swt"> <runner interface=''> <arg>-cpa</arg> <arg>$SWT_JAR</arg> <for-each <arg>${item}</arg> </for-each> </runner> </command> In this case, 0install will run the equivalent of /path/to/e-interpreter -cpa /path/to/swt.jar $EXTRA_E_OPTIONS /path/to/causeway.e-swt. Package implementations¶ This element names a distribution-provided package which, if present, is a valid implementation of this interface. The syntax is: <package-implementation package='...' distributions='...' ? main='...' ? version='...' ? > [command] * [requires] * </package-implementation> Support for distribution packages was added in version 0.28 of 0install. Earlier versions ignore this element. If the named package is available then it will be considered as a possible implementation of the interface. If main is given then it must be an absolute path. If the distributions attribute is present then it is a space-separated list of distribution names where this element applies. 0install >= 0.45 ranks the <package-implementation> elements according to how well they match the host distribution and then only uses the best match (or matches, if several get the same score). See Distribution integration for a list of supported distributions. Earlier versions of 0install ignore the distributions attribute and process all of the elements. requester may get. Package implementations still inherit attributes and dependencies from their parent group. The doc-dir and license attributes may be given, but version and released are read from the native packaging system. If version is given then only implmentations matching this pattern are used (0install >= 2.14). This allows multiple <packages-implmentation> elements for a single distribution package, which is useful if different versions have different requirements. See Constraints for the syntax. Retrieval methods¶ A retrieval method is a way of getting an implementation. The most common retrieval method is the <archive> element: <archive href='...' size='...' extract='...' ? dest='...' ?. If dest is given (0install >= 2.1), then the archive is unpacked to the specified subdirectory. It is an error to specify a target outside of the implementation directory (e.g. ../foo or attempting to follow a symlink that points out of the implementation). 
Note that the extract attribute cannot contain / or \ characters, so it can only be used to extract a top-level directory. It is intended for archives that contain their own name as the single top-level entry. The type of the archive is given as a MIME type in the type attribute (since 0install version 0.21). If missing, the type is guessed from the extension on the href attribute (all versions). Known types and extensions (case insensitive) are: application/zip( .zip) application/x-tar( .tar) application/x-compressed-tar( .tar.gzor .tgz) application/x-bzip-compressed-tar( .tar.bz2or .tbz2) application/x-xz-compressed-tar( .tar.xzor .txz) - since version 0.43, since version 2.11 on Windows application/x-lzma-compressed-tar( .tar.lzmaor .tlzma) application/x-lzip-compressed-tar( .tar.lzor .tlz) - since version 2.18, Windows only application/x-zstd-compressed-tar( .tar.zst) - since version 2.18, Windows only application/x-ruby-gem( .gem) - since version 1.0-rc1 application/x-7z-compressed( .7z) - Windows only application/vnd.rar( .rar) - since version 2.18, Windows only application/vnd.ms-cab-compressed( .cab) application/x-msi( .msi) - Windows only application/x-deb( .deb) - not supported on Windows application/x-rpm( .rpm) - not supported on Windows application/x-apple-diskimage( .dmg) - not supported on Windows. You can also fetch individual files (0install >= 2.1). This is useful for e.g. jar files, which are typically not unpacked: <file href='...' size='...' dest='...' executable='true|false' ? /> The file is downloaded from href, must be of the given size, and is placed within the implementation directory as dest. If executable is set to true (0install >= 2.14.2) the file is marked as executable after download. Recipes¶ An implementation can also be created by following a <recipe>: <recipe> ( <archive ...> | <file ...> | <rename ...> | <remove ...> | <copy-from ...> ) + </recipe> In this case, each child element of the recipe represents a step. To get an implementation by following a recipe, a new empty directory is created and then all of the steps are performed in sequence. The resulting directory must have the digest given in the implementation's <manifest-digest>. A recipe containing only a single archive is equivalent to just specifying the archive on its own. If a recipe contains an unrecognised element then the whole recipe must be ignored. <archive ...> - Causes the named archive to be fetched and unpacked over the top of whatever is currently in the temporary directory. It supports the same attributes as when used outside of a recipe. <file ...> - Causes the named file to be fetched and saved over the top of whatever is currently in the temporary directory (0install >= 2.1). It supports the same attributes as when used outside of a recipe. <rename source='...' dest='...'> - Renames or moves a file or directory (0install >= 1.10). It is an error if the source or destination are outside the implementation. <remove path='...'> - Delete the file or directory from the implementation (0install >= 2.1). It is an error if the path is outside the implementation. <copy-from id='...' source='...' ? dest='...' ?> - Copies files or directories from another implementation, e.g., for applying an update to a previous version (0install >= 2.13). The specified id must exactly match the id attribute of another implementation specified elsewhere in the same feed. You can specify the source and destination file or directory to be copied relative to the implementation root. 
Leave them unset to copy the entire implementation. Tip. Dependencies¶ A <requires> element means that every implementation within the same group (including nested sub-groups) requires an implementation of the specified interface when run. 0install will choose a suitable implementation, downloading one if required. <requires interface='...' importance='...' ? version='...' ? os='...' ? distribution='...' ? source='true|false' ? use='...' ? > [ constraints ] * [ bindings ] * </requires> The constraint elements (if any) limit the set of acceptable versions. The bindings specify how 0install should make its choice known (typically, by setting environment variables). The use attribute can be used to indicate that this dependency is only needed in some cases. By default, 0install >= 0.43 will skip any <requires> element with this attribute set. Earlier versions process all <requires> elements whether this attribute is present or not. 0test >= 0.2 will process dependencies where use="testing", in addition to the program's normal dependencies. This attribute is deprecated - it's usually better to use a <command> for this. The importance attribute (0install >= 1.1) can be either essential (the default; a version of this dependency must be selected) or recommended (no version is also an option, although selecting a version is preferable to not selecting one). The version attribute (0install >= 1.13) provides a quick way to specify the permitted versions. See the Constraints section below. The distribution attribute (0install >= 1.15) can be used to require the selected implementation to be from the given distribution. For example, a Python library available through MacPorts can only be used with a version of Python which is also from MacPorts. The value of this attribute is a space-separated list of distribution names. In addition to the official list of distribution names, the special value 0install may be used to require an implementation provided by 0instal (i.e. one not provided by a <package-implementation>). The os attribute (0install >= 1.12) can be used to indicate that the dependency only applies to the given OS (e.g. os="Windows" for dependencies only needed on Windows systems). The source attribute (0install >= 2.8) can be used to indicate that a source implementation is needed rather than a binary. This may be useful if you want to get e.g. header files from a source package. Note that if you select both source and binary implementations of an interface, 0install does not automatically force them to be the same version. A <restricts> element (0install >= 1.10) can be used to apply constraints without creating a dependency: <restricts interface='...' version='...' ? os='...' ? distribution='...' ? > [ constraints ] * </restricts> Internally, <restricts> behaves much like <requires importance='recommended'>, except that it doesn't try to cause the interface to be selected at all. Constraints¶ Constraints appear on <requires>, <restricts>, <package-implementation> and <runner> elements. They restrict the set of versions from which 0install may choose an implementation. Since 0install 1.13, you can use the version attribute on the dependency element. The attribute's value is a list of ranges, separated by |, any of which may match. Example <restricts interface='' version='2.6..!3 | 3.2.2..'/> This allows Python versions 2.6, 2.7 and 3.3, but not 2.5 or 3. Each range is in the form START..!END. The range matches versions where START <= VERSION < END. The start or end may be omitted. 
A single version number may be used instead of a range to match only that version, or !VERSION to match everything except that version. There is also an older syntax which also works with 0install < 1.13, where a child node is used instead: <version not-before='...' ? before='...' ? > not-before - This is the lowest-numbered version that can be chosen. before - This version and all later versions are unsuitable. Example <version not- allows any of these versions: 2.4, 2.4.0, and 2.4.8. It will not select 2.3.9 or 2.6. This older syntax is not supported with <packager-implementation>. Bindings¶ Bindings specify how the chosen implementation is made known to the running program. Bindings can appear in a <requires> element, in which case they tell a component how to find its dependency, or in an <implementation> (or group), where they tell a component how to find itself. Environment bindings¶ <environment name='...' (insert='...' | value='...') mode='prepend|append|replace' ? separator='...' ? default='...' ? /> * Details of the chosen implementation. Usually, the (badly-named) insert attribute is used, which adds a path to a file or directory inside the implementation to the environment variable. For example, <environment name='PATH' insert='bin'/> would perform something similar to the bash shell statement export PATH=/path/to/impl/bin:$PATH. Alternatively, you can use the value attribute to use a literal string. For example, <environment name='GRAPHICAL_MODE' value='TRUE' mode='replace'/>. This requires 0install >= 0.52. If mode is prepend (or not set), then the absolute path of the item is prepended to the current value of the variable. The default separator character is the colon character on POSIX systems, and semi-colon on Windows. This can be overridden using separator (0install >= 1.1).. The following environment variables have known defaults and therefore the default attribute is not needed with them: Executable bindings¶ These both require 0install >= 1.2. <executable-in-var name='...' command='...' ? /> <executable-in-path name='...' command='...' ? /> These are used when the program needs to run another program. command says which of the program's commands to use; the default is run. <executable-in-var> stores the path of the selected executable in the named environment variable. Example If a program uses $MAKE to run make, you can provide the required command like this: <requires interface=""> <executable-in-var </requires> <executable-in-path> works in a similar way, except that it adds a directory containing the executable to $PATH. Example If the program instead just runs the make command, you would use: <requires interface=""> <executable-in-path </requires> It is preferable to use <executable-in-var> where possible, to avoid making $PATH very long. Implementation note On POSIX systems, 0install will create a shell script under ~/.cache/0install.net/injector/executables and pass the path of this script. Generic bindings¶ Custom bindings can be specified using the <binding> element (0install >= 2.1). 0install will not know how to run a program using custom bindings itself, but it will include them in any selections documents it creates, which can then be executed by your custom code. The syntax is: <binding path='...' ? command='...' ? ... > ... </binding> If command is given, then 0install will select the given <command> within the implementation (which may cause additional dependencies and bindings to be selected). Otherwise, no command is selected. 
Any additional attributes and child elements are not processed, but are just passed through. If your binding needs a path within the selected implementation, it is suggested that the path attribute be used for this. Other attributes and child elements should be namespaced to avoid collisions. Example The EBox application launcher allows each code module to specify its dependencies, which are then available in the module's scope as getters. The ebox-edit application depends on the help library like this: <requires interface=""> <binding e: </requires> Versions¶.1 - 1.2-pre - 1.2-pre1 - 1.2-rc1 - 1.2 - 1.2-0 - 1.2-post - 1.2-post1-pre - 1.2-post1 - 1.2.1-pre - 1.2.1.4 - 1.2.2 - 1.2.10 - 3 0install:(contains 1.2.0, 1.2.1, 1.2.2, ...)(contains 2.0.0, 2.0.1, 2.2.0, 2.4.0, 2.4.1, ...) The integers in version numbers must be representable as 64-bit signed integers. Note Version numbers containing dash characters were not supported before version 0.24 of 0install and so a version-modifier attribute was added to allow new-style versions to be added without breaking older versions. This should no longer be used. Stability¶ The feed file also gives a stability rating for each implementation. The following levels are allowed (must be lowercase in the feed files): stable testing developer buggy insecure Stability ratings. 0install. When to use 'buggy'¶. Entry points¶ (only used on the Windows version currently) Entry points allow you to associate additional information with <command> names, such as user-friendly names and descriptions. Entry points are used by the Zero Install GUI to help the user choose a command and by the desktop integration system to generate appropriate menu entries for commands. An entry point is not necessary for a command to work but it makes it more discoverable to end-users. Entry points are top-level elements and, unlike commands, are not associated with any specific implementation or group. One entry point represents all commands in all implementations that carry the same name. An entry point has this syntax: <entry-point * </group> command - the name of the command this entry point represents binary-name - the canonical name of the binary supplying the command (without file extensions); this is used to suggest suitable alias names. app-id - the Application User Model ID; used by Windows to associate shortcuts and pinned taskbar entries with running processes. <needs-terminal> - if present, this element indicates that the command represented by this entry point requires a terminal in order to run. <suggest-auto-start> - if present, this element indicates that this entry point should be offered as an auto-start candidate to the user. <suggest-send-to> - if present, this element indicates that this entry point should be offered as a candidate for the "Send To" context menu to the user. <name> - user-friendly name for the command. If not present, the value of the commandattribute is used instead. Supports localization. <summary> - a short one-line description; the first word should not be upper-case unless it is a proper noun (e.g. "cures all ills"). Supports localization. <description> - a full description, which can be several paragraphs long. Supports localization. <icon> - an icon to represent the command; this is used when creating menu entries. You should provide an icon of the type image/png( .png) for Linux apps and image/vnd.microsoft.icon( .ico) for Windows apps. Localization¶ Some elements can be localized using the xml:lang attribute. 
Example

<summary xml:lang="en">cures all ills</summary>
<summary xml:lang="de">heilt alle Krankheiten</summary>

When choosing a localized element Zero Install will prefer xml:lang values in the following order:

- Exactly matching the user's language (e.g., de-DE)
- Matching the user's language with a neutral culture (e.g., de)
- en-US

Metadata¶

All elements can contain extension elements, provided they are not in the Zero Install namespace used by the elements defined here.

dc:creator - The primary author of the program.
dc:publisher - The person who created this implementation. For a binary, this is the person who compiled it.

Digital signatures¶

When a feed is downloaded from the web, it must contain a digital signature. The signature block must start on a new line, may not contain anything except valid base64 characters, and nothing may follow the signature block. XML signature blocks are supported from version 0.18 of 0install and may be generated easily using the 0publish command.

Local interfaces are plain XML, although having an XML signature block is no problem as it will be ignored as a normal XML comment.

Valid architecture names¶

The arch attribute is a value in the form OS-CPU. The values come from the uname system call, but there is some normalisation (e.g. because Windows doesn't report the same CPU names as Linux).

Valid values for OS include:

- *
- Cygwin (a Unix-compatibility layer for Windows)
- Darwin (MacOSX, without the proprietary bits)
- FreeBSD
- Linux
- MacOSX
- Windows

Valid values for CPU include:

- *
- src
- i386
- i486
- i586
- i686
- ppc
- ppc64
- x86_64
- armv6l
- armv7l
- aarch64

The if-0install-version attribute¶

To make it possible to use newer features in a feed without breaking older versions of 0install, the if-0install-version attribute may be placed on any element to indicate that the element should only be processed by the specified versions of 0install.

Example

<group>
  <new-element if-
  <fallback if-
</group>

In this example, 0install 1.14 and later will see <new-element>, while older versions see <fallback>. The syntax is as described in Constraints.

Attention

0install versions before 1.13 ignore this attribute and process all elements.

Well-known extensions¶

The following are well-known extensions to the Zero Install format:

- Capabilities (provides information for desktop integration of applications)

Future plans¶

- The extra meta-data elements need to be better specified.
- As well as before and not-before, we should support after and not-after.
- It should be possible to give a delta (binary patch) against a previous version, to make upgrading quicker.
- It should be possible to scope bindings. For example, when a DTP package requires a clipart package, the clipart package should not be allowed to affect the DTP package's environment.
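Putting the elements described above together, a minimal feed for a single binary release might look like the sketch below. The interface URI, archive URL, size and digest values are placeholders, and a real feed would normally be generated and signed with tools such as 0publish rather than written from scratch.

<?xml version='1.0'?>
<interface xmlns='http://zero-install.sourceforge.net/2004/injector/interface'
           uri='http://example.com/hello.xml'>
  <name>Hello</name>
  <summary>prints a friendly greeting</summary>
  <description>A trivial example program.</description>

  <group license='OSI Approved :: MIT License' arch='Linux-x86_64'>
    <implementation id='sha1new=0123456789abcdef0123456789abcdef01234567'
                    version='1.0' released='2021-01-01' stability='stable'>
      <manifest-digest sha256new='PLACEHOLDERDIGEST'/>
      <archive href='http://example.com/hello-1.0.tar.gz' size='1024'/>
      <command name='run' path='hello'/>
    </implementation>
  </group>
</interface>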
https://docs.0install.net/specifications/feed/
2021-07-24T01:56:17
CC-MAIN-2021-31
1627046150067.87
[array(['../../img/uml/zero-install-feed-classes.png', 'Class diagram for feed files'], dtype=object) array(['../../img/uml/zero-install-id.png', 'Identity classes'], dtype=object) array(['../../img/uml/zero-install-retr.png', 'Retrieval method classes'], dtype=object) array(['../../img/uml/zero-install-binding.png', 'Binding classes'], dtype=object) ]
docs.0install.net
How to Uninstall All Product Components from Your System¶ If you want to stop using Revenue Inbox, you need to: - Remove the Revenue Inbox Add-In from your email application. - Stop the sync process in Revenue Inbox. Your account will be automatically deactivated approximately one month after you perform the steps below. If you want to have your account deactivated, please send a corresponding request to our Support team. Tip Also refer to this FAQ entry for detailed information on custom folders and categories associated with Revenue Inbox. Removing the Revenue Inbox Add-In (Web/Cloud implementation)¶ To remove the Revenue Inbox Add-In from Microsoft Outlook for Windows, do the following: - Click File and then click Manage Add-ins. - In the list, select Revenue Inbox for Salesforce and click the Minus (Uninstall) button. To remove the Revenue Inbox Add-In from Microsoft Outlook for Mac, do the following: - Click Manage Apps in the upper-right corner of the window. - In the list, select Revenue Inbox for Salesforce and click the Minus (Uninstall) button. To remove the Revenue Inbox Add-In from Office 365, do the following: - In Mail, in the Settings menu, click Manage add-ins. - Select the “My add-ins” category, find Revenue Inbox for Salesforce and turn off the switch. Uninstalling Revenue Inbox Add-In (Desktop/MSI implementation)¶ Removing the Add-In installed from an MSI package (please refer to the corresponding section of this article for more information) follows the regular procedure of uninstalling Windows applications: 1. Press ⊞ Win+R to open the Run dialog; 2. Enter appwiz.cpl in the dialog to open Windows Programs and features; 3. Type smart in the Search Programs and Features search bar on the right-hand side; 4. Right-click on Revenue Inbox for Salesforce.com Add-In and select Uninstall. Please note that MS Outlook should be closed when the Add-In is being uninstalled. Stopping Synchronization¶ Note that if you do not stop RI synchronization after removing the Add-In, your emails and events will still be synchronized between your MS Exchange and Salesforce, according to the patterns you previously had, including the custom Salesforce categories. To suspend Revenue Inbox synchronization, do the following: - On the Revenue Inbox Sync dashboard page, click Pause. Full Sync Engine Disabling¶ RI sync component can be completely disabled by the local admin (for Enterprise implementations) or by request sent to our Support team. Sync also gets auto-disabled if you change or remove your access authentication credentials; in this case synchronization will carry out ten MS Exchange connection attempts. If all of them fail, you will get a corresponding automatic warning notification by email; later, all your customization and synchronization settings will be reset to default and the custom Salesforce Emails, Salesforce Tasks, Salesforce Contacts folders will be removed from MS Outlook (the Add-In will not be removed automatically). We would love to hear from you
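If the MSI-based Add-In has to be removed from many machines, the same uninstall can usually be scripted instead of clicking through Programs and Features. The snippet below is only a hedged sketch: the exact display name of the package should be verified on your systems first, and Outlook must still be closed while the Add-In is removed.

# PowerShell: locate the installed Add-In and uninstall it silently (display name is an assumption; verify it first)
$app = Get-WmiObject -Class Win32_Product |
       Where-Object { $_.Name -like "*Revenue Inbox for Salesforce*" }
if ($app) { $app.Uninstall() }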
https://docs.revenuegrid.com/ri/fast/articles/Deactivating-SmartCloud-Connect/
2021-07-24T01:25:11
CC-MAIN-2021-31
1627046150067.87
[array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5832fd7d903360645bfa6aa3.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5832fd9fc697916f5d053175.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5b759d640428631d7a8a0e69.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5832fdbc903360645bfa6aac.png', None], dtype=object) array(['../../assets/images/faq/fb.png', None], dtype=object)]
docs.revenuegrid.com
Indexers in a distributed deployment Important: To better understand this topic, you should be familiar with Splunk Enterprise distributed environments, covered in Distributed Deployment.. For larger-scale needs, indexing is split out from the data input function and sometimes from the search management function as well. In these larger, distributed deployments, the indexer might reside on its own machine and handle only indexing,, you need to install and configure three types of components: - Indexers - Forwarders (typically, universal forwarders) - Search head(s) Install and configure the indexers By default, all full Splunk Enterprise instances serve as indexers. For horizontal scaling, you can install multiple indexers on separate machines. To learn how to install a Splunk Enterprise instance, read the Installation Manual. Then return to this manual for information on configuring each individual indexer to meet the needs of your specific deployment. Install and configure the forwarders A typical distributed deployment has a large number of forwarders feeding data to a few indexers. For most forwarding purposes, the universal forwarder is the best choice. The universal forwarder is a separate downloadable from the full Splunk Enterprise instance. To learn how to install and configure forwarders, read Forwarding Data. Install and configure the search head(s) You can install one or more search heads to handle your distributed search needs. Search heads are just full Splunk Enterprise instances that have been specially configured. To learn how to configure a search head, read Distributed Search. Other deployment tasks You need to configure Splunk Enterprise licensing by designating a license master. See the chapter Configure Splunk Enterprise licenses in the Admin Manual for more information. You can use the Splunk Enterprise deployment server to simplify the job of updating the deployment components. For details on how to configure a deployment server, see!
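To make the forwarder-to-indexer wiring concrete, a universal forwarder is normally pointed at the indexing tier through its outputs.conf. The host names below are placeholders, and 9997 is only the conventional receiving port; it must match whatever port the indexers are configured to listen on.

# outputs.conf on a universal forwarder (example hosts and port)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997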
https://docs.splunk.com/Documentation/Splunk/7.0.6/Indexer/Advancedindexingstrategy
2021-07-24T01:11:21
CC-MAIN-2021-31
1627046150067.87
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Overview

Each Data Service Unit grants a certain amount of data storage, attachment storage, and a maximum number of API calls per day.

Community license

A Community license has a fixed Data Service Unit assigned which can only be enabled for a single tenant. Below is an overview of what is included.

Enterprise (including Enterprise Trial)

Your Automation Cloud account is granted one Data Service Unit for each of the following licenses:

- Attended
- Citizen Developer
- RPA Developer
- RPA Developer Pro
- Process Mining Developer
- Unattended Robot

Provisioning Data Service Tenants

Important

The license type for a Data Service tenant is the license type of the account at the time of the tenant creation. When upgrading a Community account to an Enterprise (or Enterprise Trial) account, your existing Data Service tenant is restricted to the community storage and API call limits. To leverage the storage and API calls available to your Enterprise (or Enterprise Trial) account, you need to create a new Data Service tenant.

License Workflow

This is how the license structure works:

- Your Automation Cloud account has different licensing quantity parameters for each service that can be connected to the platform.
- This means that, based on those quantity parameters, you can add: ...
- the tenant is deactivated;
- the tenant is deleted;
- Data Service is removed from an existing tenant.
https://docs.uipath.com/data-service/lang-de/docs/license-allocation
2021-07-24T02:22:32
CC-MAIN-2021-31
1627046150067.87
[]
docs.uipath.com
How to add a Custom Domain Follow the steps below to add & connect your domain with Swipe Pages. The number of domains you can add may be limited by your plan. - Go to the Domains Panel. - Enter your domain - Choose a sub-domain where you want your pages to be hosted. You can add multiple sub-domains to a single parent domain, but the whole process has to be done separately each time. - This step involves making DNS changes. Copy the information from the box shown, login to your domain registrar account, go to the DNS settings page and add a CNAME record with the same name as the Sub-Domain you added in the previous step and paste the value you copied. Why is this required? You as the owner of the domain are saying that traffic coming into a particular sub-domain needs to point to the pages you created in Swipe Pages. It also helps us verify that you own the domain. - Click on the Connect button, and we will verify if the DNS changes you made have gotten updated. If the verification is successful you can see that status changed to “CONNECTED” and you will now be able to publish pages to your domain. NOTE: The time taken for DNS propagation depends on your domain registrar and it can even take anywhere between 24-48 hours.
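Once the CNAME record has been added at your registrar, you can check propagation yourself before clicking Connect. The sub-domain and record type shown below are placeholders for whatever you entered in the earlier steps.

# Query public DNS for the CNAME you just created (names are examples)
dig CNAME pages.example.com +short

# or, on Windows
nslookup -type=CNAME pages.example.com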
https://docs.swipepages.com/article/8-add-your-domain
2021-07-24T02:07:41
CC-MAIN-2021-31
1627046150067.87
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecce0232c7d3a5ea54bbe81/images/5f1187262c7d3a10cbaae010/file-QOKVGfAIj6.png', None], dtype=object) ]
docs.swipepages.com
Services for Experts

For those fortunate enough to have Linux expertise, and for whom the need to move quickly and freely is paramount, a technical consensus on campus is emerging around the use of `git` to drive Linux configuration management.

In a nutshell, `git` allows *anyone* to take (in this case) config data, and create, without any further assistance, a "branch" which they can modify or replace as they see fit. These branches are tracked in git, and can be "pulled" back into the main configuration *if desired*.

For those that have the expertise:

![Image showing how git can branch from][git]

[git]: images/gitbranches.png ""

By managing their own independent Puppet "control" repo, groups on campus can leverage our infrastructure while maintaining absolute and complete independence. Campus can also utilize the infrastructure provided and maintained centrally, but essentially run their own custom environments by managing their own branches. If the central environment moves too slowly for any campus group, but they have the expertise to move ahead, that work need not be lost to the campus at large, as it can be pulled back into the main branch later.

There are many ways that a central support group could conceivably assist such groups, even if they don't need or want all services offered:

In-House admins: Completely disconnected

- Installation media on the RHEL Satellite
- Local mirrors of common software on OIT Mirrors
- Full access, including "web hooks" to OIT's git repos

In-House admins: Using OIT PXE servers only

- Any desired bits from the previous category
- Config for central CA, use central Foreman UI
- Networking, firewalls, and HA all handled by OIT
- Central CA provides secure (overseen by SnC) certificates with easier workflow.
- Central CA means movement to any other group using the same CA is trivial.
- Completely independent Puppet/Ansible/Chef/…
- Completely independent git repos

Any work to monitor, load balance or scale the config management system of choice would be done by the in-house group, not centrally.

In-House admins: OIT PXE + Puppet servers

- Any desired bits from the previous categories.
- Completely independent git branch holding all configs
- Git "webhook" informs central Puppet Servers when config changes are made

If you use the OIT Puppet servers, then monitoring, load balancing and fault tolerance are provided centrally.

In-House admins: Everything until something fails

It's important to point out that, at any time, someone using the "just something that works" model can simply branch off and do their own thing. So, if you're using the "just works" model, and discover in the middle of the night on a holiday weekend that you need a special config, you can create a branch, and implement and ship whatever is needed at that time. If desired, you can work with CSI to have the centrally provided config support your scenario, and merge back into the main branch.

This model allows everyone on campus to benefit from expertise that is currently only available to those that can afford it.
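In day-to-day terms, the branch-and-merge workflow described above maps onto a handful of ordinary git commands. The repository URL and branch names below are placeholders; the real control repo location would come from the central team.

# Start from the central Puppet control repo (URL is an example)
git clone https://git.example.ncsu.edu/config/control-repo.git
cd control-repo

# Create and publish a branch carrying your group's custom configuration
git checkout -b ece-custom
git commit -am "Override NTP and firewall settings for ECE hosts"
git push -u origin ece-custom

# Later, if the change proves generally useful, it can be merged back
git checkout production
git merge ece-custom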
https://csi-docs.oit.ncsu.edu/services-for-experts.html
2021-07-24T02:05:44
CC-MAIN-2021-31
1627046150067.87
[array(['images/gitbranches.png', 'Image showing how kit can branch from https://imgur.com/gallery/YG8In8X'], dtype=object) ]
csi-docs.oit.ncsu.edu
NetFlow Optimizer sends data over the UDP protocol in syslog or JSON format, which makes it easy to ingest into Elasticsearch using Filebeat or Logstash or both. Important: configure the NFO output format as JSON. Filebeat has a small footprint and enables you to ship your flow data to Elasticsearch securely and reliably. Please note that Filebeat cannot add calculated fields at index time; Logstash can be used with Filebeat if this is required. The steps below describe the NFO -> Filebeat -> Elasticsearch - Kibana scenario.
1. Make sure all services are running.
2. Download nfo_fields.yml and add the nfo_fields.yml file to the Filebeat configuration directory (e.g. /etc/filebeat). This file contains the NFO field definitions for the template.
3. In the Filebeat configuration directory edit the filebeat.yml file:
- Add after the ‘filebeat.inputs:’ line the following:
- type: udp
  max_message_size: 10KiB
  host: "0.0.0.0:5514"
where 5514 is the Filebeat input port (it should match the NFO UDP output port).
- After the ‘setup.template.settings:’ line add the following:
setup.template.enabled: true
setup.template.name: "nfo"
setup.template.pattern: "nfo-*"
setup.template.fields: "nfo_fields.yml"
setup.template.overwrite: true
# if ilm.enabled is set to false then in the outputs a custom index name can be specified
setup.ilm.enabled: false
- In the ‘output.elasticsearch:’ section add the index name, for example:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "nfo-%{+yyyy.MM.dd}"
- In the processors section add this:
- decode_json_fields:
    fields: ["message"]
    target: ""
    overwrite_keys: true
4. Restart the Filebeat service.
5. In kibana.yml set host and port.
6. In Kibana go to (port 5601) %Kibana_server%/app/management/kibana/indexPatterns and define the new index pattern.
Logstash has a larger footprint, but enables you to filter and transform data, adding calculated fields at index time, if necessary.
1. Make sure all services are running.
2. Download and add the following files to the Logstash conf.d configuration directory (e.g. /etc/logstash/conf.d): nfo_mapping.json, nfo.conf. Modify the input and output sections of nfo.conf to match your environment:
input {
  udp {
    port => 5514
    codec => json
  }
}
where 5514 is the Logstash input port (it should match the NFO UDP output port).
output {
  elasticsearch {
    hosts => ["http://<elasticsearch host IP>:9200"]
    index => "nfo-logstash-%{+yyyy.MM.dd}"
    template => "/etc/logstash/conf.d/nfo_mapping.json"
    template_name => "nfo*"
  }
}
3. Restart the Logstash service.
4. In kibana.yml set host and port.
5. In Kibana go to (port 5601) %Kibana_server%/app/management/kibana/indexPatterns and define the new index pattern.
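For readers who take the Logstash route because they need calculated fields, a minimal illustration of such enrichment is sketched below; the field names (in_bytes, out_bytes, total_bytes) are hypothetical examples rather than a statement about the NFO schema, so adjust them to the fields actually present in your data:

filter {
  if [in_bytes] and [out_bytes] {
    ruby {
      # add a derived field at index time from two existing numeric fields
      code => "event.set('total_bytes', event.get('in_bytes').to_i + event.get('out_bytes').to_i)"
    }
  }
}

A block like this would go into nfo.conf between the input and output sections.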
https://docs.netflowlogic.com/integrations-and-apps/integration-with-elastic
2021-07-24T01:07:50
CC-MAIN-2021-31
1627046150067.87
[]
docs.netflowlogic.com
Localization The system automatically displays extension messages in the language which Plesk currently uses if the extension has the corresponding translations. For example, if a customer uses the web interface in German, the extension also has the option to display its messages in German. Extension messages are categorized into the following groups: - Meta information. These messages are kept in the meta.xml file in the extension's root directory. To provide a translation of a node, specify the same node with a certain xml:lang attribute with a 4-letter language code conforming to RFC1766. For example:
<description>Easily add customers and websites</description>
<description xml:Einfache Neukundenerfassung & simple Website-Erstellung</description>
- Other messages. These messages are a part of the extension code, for example, form input captions, page titles, action names, etc. Next in this chapter you will learn how to help the system discover the translations to display them to Plesk users. The interface that allows for automatic message translation is called pm_Locale and it is presented by the corresponding class. To use this interface, you need to: - Design the placeholders (keys) you will use instead of actual values in the GUI messages. For example, to substitute a button caption Accept, you may use acceptButtonText. A key can be an arbitrary string. - Use pm_Locale::lmsg(key) instead of an actual string in the extension code. Here key can be the acceptButtonText key we obtained at step 1. - Add language-dependent values for each key. The key-value pairs must be included in the $messages associative array, for example:
<?php
$messages = array(
    'acceptButtonText' => 'Accept',
);
A file which includes the $messages array must reside in <product-root>/plib/modules/<module ID>/resources/locales/ and its name must be equal to the corresponding 4-letter locale code. For example: de-DE.php. All messages corresponding to a certain language must be stored in a language-specific file. For example, <product-root>/plib/modules/<module ID>/resources/locales/en-US.php:
<?php
$messages = array(
    'acceptButtonText' => 'Accept',
    'cancelButtonText' => 'Cancel',
);
<product-root>/plib/modules/<module ID>/resources/locales/de-DE.php:
<?php
$messages = array(
    'acceptButtonText' => 'Accept',
    'cancelButtonText' => 'Stornieren',
);
To retrieve the currently used interface language, you can use the getCode method. The following example shows the contents of an English locale file for a module with the code panel-news, which is located in the file /usr/local/psa/admin/plib/modules/panel-news/resources/locales/en-US.php.
<?php
$messages = array(
    'blockTitle' => 'Plesk News',
    'buttonTitle' => 'View Article',
);
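A minimal usage sketch, assuming the keys from the example above (the surrounding code is illustrative only, not taken from the Plesk SDK samples):

<?php
// Fetch localized captions defined in resources/locales/<locale>.php
$accept = pm_Locale::lmsg('acceptButtonText'); // "Accept"
$cancel = pm_Locale::lmsg('cancelButtonText'); // "Cancel" or "Stornieren", depending on the interface language

// The currently used interface language code, e.g. "en-US" or "de-DE"
$locale = pm_Locale::getCode();

Strings fetched this way automatically follow the language Plesk is currently displaying, so the extension itself does not need any language-switching logic.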
https://docs.plesk.com/en-US/12.5/extensions-guide/localization.71107/
2021-07-24T01:27:41
CC-MAIN-2021-31
1627046150067.87
[]
docs.plesk.com
Nimble Builder and website performance 🚀 Nimble Builder is designed to load fast, in particular on mobile devices. Even when using a basic cache server configuration, the plugin can help you get an A grade on a performance test like GT Metrix. Web pages built with Nimble Builder can include commonly heavy assets without degrading performance too much. Like Google fonts, Font Awesome icons, sliders, images and video backgrounds, all of which usually have a negative impact on page load time. To achieve this, Nimble Builder optimizes the critical rendering path by deferring the loading of non-critical assets until after the page is painted by the browser. The plugin includes a set of straightforward performance options allowing you to take control of many parameters impacting page load performance. Those options are located in the customizer > Nimble Builder > Site Wide Options. Advanced object cache For users of cache plugins allowing object cache, Nimble Builder allows you to take full advantage of this cache by adding the following PHP constant to your wp-config.php file : define( 'NIMBLE_OBJECT_CACHE_ENABLED', true ); When Nimble Builder detects this PHP constant, it will add all the content created with Nimble Builder to the WP object cache, making it instantly available for a persistent object cache plugin. Use case: nimblebuilder.com The following screenshots show results with GT Metrix, Pingdom, and the Google page speed tool on the home page of nimblebuilder.com (built with Nimble Builder of course ⭐️). At the time of testing, the page included a slider, a video background, Google fonts, Font Awesome icons and many images. The tested page uses the latest default WP theme. Since this page uses a Nimble Builder header and footer, we have removed (WP developers would say dequeued) the unnecessary theme assets like the main stylesheet and some javascript files; a sketch of how this can be done is shown below. To leverage server caching, we use the free W3 Total Cache plugin, with basic settings for page cache, minification and browser compression. Featured image credits: wikimedia.org
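For readers who want to reproduce that dequeuing step, here is a minimal sketch; the handles ('twentytwenty-style', 'twentytwenty-js') are assumptions used for illustration — the real handle names depend on the theme in use and are not something Nimble Builder sets for you.

<?php
// Illustrative only: remove theme assets that are not needed on pages
// whose header and footer are built entirely with Nimble Builder.
add_action( 'wp_enqueue_scripts', function () {
    wp_dequeue_style( 'twentytwenty-style' ); // main theme stylesheet handle (assumed)
    wp_dequeue_script( 'twentytwenty-js' );   // theme script handle (assumed)
}, 20 ); // priority 20 so it runs after the theme has enqueued its assets

This would typically live in a small custom plugin or in a child theme's functions.php.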
https://docs.presscustomizr.com/article/414-nimble-builder-and-website-performances
2021-07-24T01:07:14
CC-MAIN-2021-31
1627046150067.87
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5eac602f042863474d19ff46/file-6Lmpg0FQiH.png', 'Nimble Builder and website performance'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5fc75024eb7cc612aa353f9f/file-iQCp77dLaL.jpg', 'Nimble Builder performances'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ec6bc062c7d3a5ea54b882b/file-SJpab6bNEK.png', 'Nimble Builder GT metrix report'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ec6bc16042863474d1b24e5/file-fCReRN0vsF.png', 'Nimble Builder pingdom report'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5eac5bbd2c7d3a5ea54a4f6d/file-Sj8TEUyKXb.png', 'Nimble Builder performance report on Google audit tool'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5eac5bcf042863474d19ff11/file-8O357H9eky.png', 'Nimble Builder performance report on Google Speed Test tool'], dtype=object) ]
docs.presscustomizr.com
Authentication Failed Over HTTPS If the password or username were mistyped when cloning a GIT repository by using the HTTPS method, the Enter Credentials window is displayed with the message: Authentication failed. Please check whether the User and Password you entered are correct. Over SSH The following Enter Credentials window is displayed when authentication fails over SSH: Please check your Private Key and password are correct and try again. Cloning a Remote GIT Repository The window is also available from the status bar. Note: The Undo does not cover unversioned files. If you create new files and then select Undo, the files are not removed from the project. Once added to the project tree, new files remain there unless they are manually deleted. Using GIT with a Proxy Server GIT integration in Studio supports accessing remote repositories if internet access is through a proxy server. This can be done in two ways: either configured at machine level in the Proxy Settings window or by making changes to the GIT configuration files. Proxy details configured in the Proxy Settings window are taken into account, without the need to enter them in the .gitconfig file. To configure proxy details manually, add them to the GIT configuration files in the following form (both values are placeholders):
[http "<repository URL>"]
    proxy = <proxy address>
GIT configuration files can be found at the following locations:
- config file: %ProgramData%\Git
- .gitconfig file: %UserProfile%
- local config file at project level, for example %UserProfile%\Desktop\testproject\.git
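As a sketch of the same configuration expressed as git commands (the proxy address and repository URL below are placeholders, not values required by Studio):

git config --global http.proxy http://proxy.example.com:8080
git config --global http.https://github.com/.proxy http://proxy.example.com:8080

The first line sets a proxy for all HTTP(S) remotes; the second form scopes the proxy to a single remote URL, which corresponds to the URL-specific [http "..."] section shown above.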
https://docs.uipath.com/studio/v2020.4/docs/managing-projects-git
2021-07-24T02:19:14
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.readme.io/fa12a5d-git_authentication.png', 'git_authentication.png'], dtype=object) array(['https://files.readme.io/fa12a5d-git_authentication.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/bb1766f-ssh.png', 'ssh.png'], dtype=object) array(['https://files.readme.io/bb1766f-ssh.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/6dd249a-failed.png', 'failed.png'], dtype=object) array(['https://files.readme.io/6dd249a-failed.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/60a61be-failed_token.png', 'failed_token.png'], dtype=object) array(['https://files.readme.io/60a61be-failed_token.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/8c05307-git_commit.png', 'git_commit.png'], dtype=object) array(['https://files.readme.io/8c05307-git_commit.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/fc9e2e8-remote_repo.png', 'remote_repo.png'], dtype=object) array(['https://files.readme.io/fc9e2e8-remote_repo.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/262986d-git_outofdate.png', 'git_outofdate.png'], dtype=object) array(['https://files.readme.io/262986d-git_outofdate.png', 'Click to close...'], dtype=object) array(['https://documentationpicturerepo.blob.core.windows.net/screenshots/screenshots/2019.5_Studio/changes.png', 'changes_icon'], dtype=object) array(['https://documentationpicturerepo.blob.core.windows.net/screenshots/screenshots/2019.5_Studio/push.png', 'push_commit'], dtype=object) array(['https://files.readme.io/ffe55bc-git_commit.png', 'git_commit.png'], dtype=object) array(['https://files.readme.io/ffe55bc-git_commit.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/8aac6e4-git_undo.png', 'git_undo.png'], dtype=object) array(['https://files.readme.io/8aac6e4-git_undo.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d2db690-git_copy.png', 'git_copy.png'], dtype=object) array(['https://files.readme.io/d2db690-git_copy.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3787ed3-git_manage.png', 'git_manage.png'], dtype=object) array(['https://files.readme.io/3787ed3-git_manage.png', 'Click to close...'], dtype=object) array(['https://documentationpicturerepo.blob.core.windows.net/screenshots/screenshots/2019.5_Studio/manage_branches.png', 'branches'], dtype=object) array(['https://files.readme.io/a7364b2-solve_conflicts.png', 'solve_conflicts.png'], dtype=object) array(['https://files.readme.io/a7364b2-solve_conflicts.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
MeshInstance2D¶ Inherits: Node2D < CanvasItem < Node < Object Node used for displaying a Mesh in 2D. Description¶ Node used for displaying a Mesh in 2D. Can be constructed from an existing Sprite via a tool in the editor toolbar. Select "Sprite" then "Convert to Mesh2D", select settings in popup and press "Create Mesh2D". Tutorials¶ Property Descriptions¶ The Mesh that will be drawn by the Mesh.
https://docs.godotengine.org/fr/latest/classes/class_meshinstance2d.html
2021-07-24T01:40:09
CC-MAIN-2021-31
1627046150067.87
[]
docs.godotengine.org
Crash-consistent Snapshot copies Contributors Download PDF of this page You might have to create a crash-consistent Snapshot copies of your file system or disk groups. SnapDrive for UNIX creates Snapshot copies that contain the image of all the storage system volumes specified in the entity. When you create a Snapshot copy of a storage entity, such as a file system or disk group, SnapDrive for UNIX creates a Snapshot copy that contains the image of all the storage system volumes that comprise the entity you specified using a file_spec argument. The file_spec argument specifies the storage entity, such as the file system, LUN, or NFS directory tree that SnapDrive for UNIX uses to create the Snapshot copy. SnapDrive for UNIX makes consistent storage components that compose the entity you requested in the Snapshot copy. This means that LUNs or directories being used outside those specified by the snapdrive snap create command file_spec argument might not have consistent images in the Snapshot copy. SnapDrive for UNIX enables you to restore only the entities specified by the file_spec argument that are consistent in the Snapshot copy. Snapshot copies of entities contained on a single storage system volume are always crash-consistent. SnapDrive for UNIX takes special steps to ensure that Snapshot copies that span multiple storage systems or storage system volumes are also crash-consistent. The method that SnapDrive for UNIX uses to ensure crash consistency depends on the Data ONTAP version used where the storage entities in your Snapshot copy resides.
https://docs.netapp.com/us-en/snapdrive-unix/solaris/concept_crash_consistent_snapshot_copies.html
2021-07-24T01:24:01
CC-MAIN-2021-31
1627046150067.87
[]
docs.netapp.com
datalad.api.clone¶ datalad.api. clone(source, path=None, dataset=None, description=None, reckless=False, alt_sources=None)¶ Obtain a dataset copy from a URL or local source (path) The purpose of this command is to obtain a new clone (copy) of a dataset and place it into a not-yet-existing or empty directory. As such clone provides a strict subset of the functionality offered by install. Only a single dataset can be obtained, recursion is not supported. However, once installed, arbitrary dataset components can be obtained via a subsequent get command. Primary differences over a direct git clone call are 1) the automatic initialization of a dataset annex (pure Git repositories are equally supported); 2) automatic registration of the newly obtained dataset as a subdataset (submodule), if a parent dataset is specified; 3) support for datalad’s resource identifiers and automatic generation of alternative access URL for common cases (such as appending ‘.git’ to the URL in case the accessing the base URL failed); and 4) ability to take additional alternative source locations as an argument.
https://datalad.readthedocs.io/en/stable/generated/datalad.api.clone.html
2019-02-16T02:49:50
CC-MAIN-2019-09
1550247479838.37
[]
datalad.readthedocs.io
Using.
https://docs.toonboom.com/help/harmony-12-2/premium-server/3d-space/complex-sets-production_scenes_.html
2019-02-16T04:08:12
CC-MAIN-2019-09
1550247479838.37
[array(['../Resources/Images/HAR/Stage/3D_Space/anp_castleshotremovewall.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/3D_Space/anp_castleshotremovewall1.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/3D_Space/anp_castleshotremovewall2.png', None], dtype=object) ]
docs.toonboom.com
Tracker (Legacy)¶ Attention Deprecation notice Trackers v3 are deprecated. There are no longer functional updates on them. Only security bugs might be fixed when possible. It’s highly recommended to switch to Tracker v5. Trackers and real-time Reports Disclaimer: this chapter covers the legacy version (aka v3) of the Tracker service. For documentation on the current tracker system (v5) see Trackers and real-time Reports. The Tuleap Tracker is one of the most powerful and versatile services offered to Tuleap hosted projects. It allows for the tracking of various artifacts like bugs, tasks, requirements, etc. and a project can create as many trackers as necessary. All trackers, whether predefined at the site level or created by a project, can be fully customized. Defining a tracker is just a matter of assigning it a name, choosing the fields that are going to be used in the tracker, and what values will be allowed in those fields. In addition to the project definable fields and field values there are other fields that are permanently attached to a tracker artifact. These are: - Follow-up comments: all artifacts have the full list of free text comments posted by users attached to them. - File attachment: all artifacts can have any number of files attached to them. File attachments generally contain supplementary information that helps better characterize the nature of the artifact. - CC list: any number of users can be notified of modifications to an artifact by including their Tuleap user name or email address in the CC list. Entering the Tracker Service¶ To enter the Tracker service of a given project, first go to the project and click on “Trackers v3”. New Artifact Submission¶ To submit a new artifact to a given project you must first access the appropriate tracker of that project as indicated in the section above (see Entering the Tracker Service). When entering a given tracker you are presented with the artifact selection and browsing screen (more about this facility in Artifact Browsing). For now let’s click on the “Submit a New Artifact” item (or whatever the artifact name is) from the Tracker Menu Bar in the upper part of the welcome screen. In any case don’t forget to click on the “Submit” button when you are finished! Tip About to submit a bug or a support request to a Tuleap Project? Before you do that, make sure that others haven’t yet submitted a similar artifact. To do so you can either browse the artifact database through the Artifact Selection and Browsing facility or you can use the search box in the Tuleap Main Menu and search by keywords. Artifact Browsing¶ Tuleap offers the ability to browse the artifact database according to a variable set of criteria. Selection Criteria¶ The upper part of the artifact browsing screen is devoted to the selection criteria. You can select bugs by Category (the module in which the bug occurred), Group (nature of the bug like Crash, Documentation Typo, …), Status (e.g. Open, Closed, …) and Assignee (the person in charge of the bug). Other trackers may show more, less or different selection fields depending on the configuration put in place by the tracker administrators. How selection criteria are filled out depends on their field type. The Tracker Service currently offers the following types of fields used as search criteria: Select Box Field¶ A select box field can take its value in a set of predefined values.
Multiple Select Box Field¶ A multiple select box field takes it’s value from a set of predefined values. While the select box field introduced above only allows one to select only a single field value, the multiple select box field allows the user to select multiple values for the same field. In search mode it behaves exactly like the simple select box:: 1999-03-21 is March 21st, 1999, 2002-12-05 is Dec 5th, 2002.. Favorites and Predefined Tracker Queries¶ Tip If you often run the same queries against a tracker with the same set of selection criteria, it is probably a good idea to save this query for later re-use. To do this: select the appropriate tracker report, then choose your search criteria, click on the “Browse” button to run the query. Finally click on the “Bookmark this Page” item in the Tuleap Main Menu. A new bookmark will show up in your Personal Page. A click on this bookmark will run the exact same query again. Your favorite queries can be saved via the Tuleap bookmark mechanism as explained in the Tip box but there are also shortcuts in the Tracker Menu Bar for the most common queries. They are: - Open Artifacts: display all the artifacts that are not yet closed for this project. - My Artifacts: display the artifacts assigned to you (based on the Tuleap account you are currently using) Also notice that Tuleap always keeps track of the last run query. Next time you enter the tracker welcome screen, Tuleap will use the same set of selection criteria in the selection fields and display the list of matching artifacts accordingly. Advanced Search Mode¶ At any time during the search phase, you can toggle the search mode from Simple to Advanced and vice-versa (see the Advanced Search link). The Advance Search mode allows you to select multiple values for each selection criteria. Using this mode you could search for both open and suspended bugs assigned to project members A and B. Tracker Search Results¶ Based on your selection of search criteria, Tuleap runs a query against the tracker database, selects the matching artifacts, and displays them right below the selection criteria. Columns displayed in the artifact list are entirely configurable by the project team (see Tracker Reports). Artifact severity is color coded. Colors associated with severity levels may vary from one Tuleap site to another and it is therefore shown at the bottom of the list of results generated by the search. Finally, corresponding “Artifact ID”. Artifact List Sorting¶ By default, artifacts are sorted by ID which happens to be the chronological order in which they have been submitted and stored in the Tuleap database.. One exception to this rule is for sorting by Severity. Severity being shown as a color code and not as a column per se, there is a special link at the bottom of the screen to sort the list of results by Severity. For more sophisticated sorting you can also activate. Note Note:Sorting criteria, like selection criteria, are also saved into your preferences and the same sorting criterion is re-used in subsequent queries. Export Tracker Search Results¶ At the bottom of the Search Result screen you have a button to export all artifacts of your search result into CSV format. Using this facility you can easily select the tracker artifacts that you want to process with other tools outside Tuleap. 
Printer Friendly Version¶ At any point in the process of browsing the tracker database you can click on the “Printer Version” link to display a simplified and non-decorated artifact list that prints nicely or can be copy-pasted in a document of your choice. For better readability we strongly advise you to print the list of results in landscape format. Graphical visualization¶ You can also view graphical results of your search in the ‘Charts’ section. There are basically three types of graphs supported: Pie, Bar and Gantt. Tracker Reports¶ Tracker reports allow for the definition of a specific layout of the artifact search and browsing screen where one can choose the selection criteria and the columns used in the list of matching artifacts. Depending on the project, users may be able to choose from several tracker reports. Any Tuleap user with access to the tracker can define her own personal report. In this case the report is a personal one and is only visible to this particular user. On the contrary, tracker administrators have the ability to define project-wide reports that all users will be able to use. See Tracker Report Management for more details on managing tracker reports. Graphical Tracker Reports¶ There is also a report system for the graphical visualization service. Depending on the project, users may enjoy the ability to choose from several graphical tracker reports by using the upper pull-down menu of the ‘Charts’ section. Any Tuleap user with access to the tracker can define her own personal graphical report. In this case the report is a personal one and is only visible to this particular user. On the contrary, tracker administrators have the ability to define project-wide graphical reports that all users will be able to use. See Tracker Graphical Report Setting for more details on managing tracker reports. Artifact Update¶ Selecting an artifact ID from the list generated by a search operation will bring you to a screen with all the artifact details. Depending on the permissions you have on this tracker (see Field Permissions Management), you may, among other things, add yourself to the CC list or attach new files to the artifact. The Artifact Update screen is divided into several parts: Header, Comments, CC List, Artifact Attachments, Dependencies and History. Header¶ The header zone is where you’ll find all the fields associated with an artifact, as shown on the figure “Header of artifact update screen (artifact fields)” (see that figure for more details on the Tracker configuration). Header of artifact update screen (artifact fields) CC List¶ As explained later in this chapter (see E-mail Notification) Tracker offers a powerful email notification system for those users who, at some point, were involved in the life of the artifact whether as a submitter, an assignee or as a person who posted a follow-up comment (commenter). Sometimes it is however helpful to involve other people in the email notification process even if they did not play an explicit role in the life of the artifact so far. For instance, you may want a QA contact or the originator of the artifact, when different from the submitter, to receive a carbon-copy (CC) of the email notifications. This is precisely what the CC List is intended for. Inserting CC names in the CC list will allow these people to receive update notifications for this specific artifact. CC Names¶ The CC names can be either email addresses or a Tuleap login name if the user has a Tuleap account. - Tuleap login name: when the person you want to involve in the notification process has a Tuleap account, use it in place of her email address.
Using the Tuleap login name give to the recipient the ability to customize the kind of update events they want to receive. For more information on how to customize notification preferences for a given project see Email Notification Settings. - Email Address: there is no restriction on the type of email address you can type. It can be either individuals or mailing list - see Mailing Lists. Unlike CC names entered as login names, CC names added in the form of email addresses have no customization capabilities and receive all bug updates. Adding and Deleting CC Names¶ Several CC names can be added at once by separating them with commas or semi-column in the “Add CC” field. Using the comment field, one can also explain why these CC names were added and/or who they are. CC names addition and deletion is subject to a number of permission rules: - Adding a CC name: Anonymous users cannot add CC names. Any other Tuleap user who is logged in can add CC names and the CC list will clearly show who added this entry and when. - Deleting a CC name: users with Tracker Administrator permissions on a given tracker can delete any entry in the CC list for any artifact of this tracker. All other users can delete CC entries that were either added by themselves or entries where the CC name matches their own name or email address in any Tuleap projects. In other words a Tuleap user has the right to undo what they have done or remove themselves from a CC list in any tracker. Artifact Attachments¶ In addition to comments, the Tuleap Tracker allows you to attach virtually any piece of information to an artifact in the form of a file. Typical examples of artifact attachments are application screen shots in PNG, GIF, JPEG or whatever image format is appropriate; it can also be core dumps, a binary image of program that crashed or even a simple text file showing a stack trace or an error message. Artifact attachments can be of any type (image, video, sound, text, binary…) and a comment field can be optionally used to annotate the attachment. The maximum size of a file attachment is site dependent. The default is 2 MByte. Artifact Dependencies¶ The next section on the artifact update screen deals with artifact dependencies . Users have the ability to establish a dependency link from an artifact to one or several other artifacts belonging to any of the tracker of any Tuleap project. This is made possible by the fact that artifacts have a unique ID across the entire Tuleap system. The Tuleap system does not impose any semantic on the nature of these dependency links. As a project team, you are free to agree on a specific meaning for these links. It can be a cause-effect type of relationship, a duplication of information or a time dependency for a task tracker. To create an artifact dependency, type one or several artifact IDs (comma separated) and submit the form. The cross-referenced artifacts appear in a table right below the input field showing their description as well as the tracker and the project they belong to. To delete an artifact dependency simply click on the wastebasket icon to the right of the artifact description line and confirm or cancel when asked by the dialog box. The dependency section shows the artifact dependencies in both ways: it shows the list of artifact(s) the displayed artifact depends on but also the list of artifacts that depend upon the one you are browsing. 
Artifact Cross-Referencing¶ In addition to the rather formal way of expressing a dependency between two artifacts presented earlier, Tuleap offers. Permissions on artifacts¶ Tracker admins can restrict access to artifact. Those permissions are a complement to the permissions defined at tracker level. The tracker admin just has to edit the artifact and update permissions like in the example below (where the artifact is currently restricted to project admins and members). Artifact History¶ The last part of the artifact update screen is devoted to the artifact history. The artifact history keeps track of all the changes that occurred on all artifact fields since the creation of the artifact. The artifact history shows what fields changed, what the old value was before the change took place, who changed it and when. Artifact Mass Change¶ Tuleap provides project and tracker administrators with the possibility to update several artifacts in one step: delete/add the same CC name entry or file attachment to a set of artifacts, assign a list of artifacts to a person, etc… A typical application of the mass update feature is when a person leaves a project and all the artifacts that are assigned to her have to be re-affected to another person. Selection Criteria for Mass Change¶ The artifacts to be updated can be selected according to a set of criteria. These criteria are the same as for artifact browsing. For fine-grained control you may also select individually all artifacts concerned by the mass change. Update¶ Once you have selected all the artifacts to be updated you can now proceed to affect these changes via the Update screen. The Update screen for the Mass Change is very similar to the normal Artifact Update screen. It is divided into the following parts: Header, Comments, CC List, Artifact Attachments, and Dependencies. In the Header zone you find all the fields associated to the artifact. Only those that are changed from Unchanged to a defined value will be taken into account for the update. The CC List zone differs from the normal Artifact CC List zone in that it contains all the CC names of the selected artifacts with the information of in how many artifacts a CC name is used. If you add a CC Name it will be added to all the three artifacts. Equally, the Attachment zone contains any files attached to the selected artifacts with the information as to how many of those artifacts each file is attached to. The Dependencies zone is structured in the same manner. Each mass change is tracked in the project history (Project History). On the other hand, no e-mail notification is sent in response to the mass change. Artifact Duplication¶ If artifact duplication is allowed for the tracker (see section General Configuration Settings), project members can duplicate an artifact. To duplicate an artifact, select an existing artifact (as though you want to update it) and click the “Copy this artifact” link. Then, you are in an artifact submission screen, with all the values of the duplicated artifact. As summary must be unique, a copy information is just appended to the original summary. By default, a follow-up comment is pre-filled with an indication of the duplication, and a dependent is also pre-filled with a reference to the original artifact. Of course, you are free to modify the values of the duplicated artifact. 
Only project members are allowed to duplicate artifacts., a new attachment or a change in any of the artifact fields - an e-mail message is sent to the following actors: - The artifact submitter (the person who initially submitted the artifact) - The artifact assignee (the project member to whom the artifact is currently assigned) - The people on the CC list if any (the persons who are listed in the CC list of a given artifact) - All users who posted at least one follow-up comment to the artifact. Beside these simple rules, the Administration module of the Tuleap Tracker allows Tuleap users to customize the email notification process. For further information see Email Notification Settings). browsing screens. Exporting Excel Sheets in CSV Format¶ To export an Excel sheet to CSV format, simply follow the steps below: File -> Save As - In the dialog window choose CSVas the Save as type CSV File Parsing¶ The CSV format that is accepted as import input is accessible over the CSV file submission screen. This page allows manual validation of the tracker field labels, labels a artifact row - Unknown tracker field label - Field values that do not correspond to the predefined field values of a (multi) select box field - Double submission (i.e. submission of a new artifact with exactly the same summary as an existing artifact) - Unknown artifact identifier - Remove already submitted follow-up comments All other potential errors have to be checked manually by looking at the parse report table. The Database Update¶ If you import new artifacts, all non-mandatory fields that are omitted in the CSV file will be initialized to their default value. If you want to update the CC list or dependencies list of an existing artifact, be aware that the import will delete all former CC names or dependencies of the artifact and put the CC names or dependencies from the import file instead. All follow-up comments in the csv file that had already been posted are removed to avoid double submission.. Tip: Table: Default Tracker Access Permissions Tracker Creation¶ Before one can define what fields and field values to use in a tracker it must first be created. Tracker creation can be accessed from the “Create a New Tracker” menu item that is available either in the public part of the tracker or in the Tracker Administration section. Tip When a new project is created on Tuleap a number of trackers are automatically created for this project. This would typically be a bug tracker, a task tracker and a support request tracker. If your project manages this type of artifact please use the predefined trackers first. Of course, you are free to define new fields or customize existing ones in each of the trackers. To define a new tracker you must provide the following information (see Creation of a new tracker (here a defect tracking system)): -_1<< Creation of a new tracker (here a defect tracking system)) or project specific templates. Remark: using a template doesn’t mean you have to stick to the list of fields and field values defined in this template. You can always add or remove fields or fine-tune the field settings afterwards. Tuleap-wide Template: Tuleap-wide. 
Tuleap-wide Tracker Templates¶ The standard trackers provided for each new Tuleap project are: - Bugs - Patch - Support Requests - Tasks - Scrum Backlog Each of those templates have predefined fields that correspond to the specific work processes around bugs, patches Patch Tracker Template¶ The role of the Patch tracker is to let non project members or project members with restricted permissions to contribute source code modifications to the project. On how to generate source code patches see the CVS chapter (Contributing your Changes (other users)) or the Subversion chapter (Contributing your Changes (for other users)). Note A note to the project team Receiving source code modifications or other contributions from other Tuleap users does not imply that you have to accept the new code and insert it in your main source tree. It is up to the project team to decide what to do with it. One of the interesting features of the Patch tracker is that submitted patches are available to anybody at all time regardless of the final decision of the project team. Therefore any Tuleap visitor is free to download any submitted patch and apply it onto its own version of the software even if the project team has decided not to apply the patch on the main source tree. The Support Request Tracker Template¶ The Support Request (SR) tracker is one of the communication mechanisms that your project should offer to the project community. It allows any Tuleap user to ask question to the project team and call for assistance. Tuleap users who have access to the tracker can follow the thread of discussions between the other users and the project team. It is also possible to review all the SRs that were posted in the past and the answer given by the project team. With the Support Request tracker, a project team can easily and efficiently coordinate technical support activities. Scrum Backlog Template¶ Codendi makes it easy to implement the Scrum methodology, by providing a Scrum Backlog tracker to each project. You will find a comprehensive description of Scrum on Wikipedia. The Scrum Backlog tracker contains artifacts called “User Stories”, that describe needs expressed by the customers of the project. The tracker has been customized to capture customer requirements: it is possible to define the customer value of each story, its acceptance criteria, its effort, its current backlog (Product Backlog or Sprint Backlog), etc. Other optional fields are available, and of course, each project may customize the tracker to fit the way it implements the methodology At the beginning of a Scrum project, each customer user story must be stored in the Product Backlog (‘Backlog’ field of the tracker). During the first Sprint Meeting, a few stories are selected by the team to be implemented in the first iteration. They are moved to the “Sprint Backlog” (‘Backlog’ field), and evaluated (‘Initial Effort’), or even duplicated into smaller stories.. Tracker Administration¶ As we went through the description of the Tuleap Tracker features, we referred several times to the flexibility of this system and how easy it is to customize your own tracker. This can be done through the Tracker Administration module available under the “All Trackers Admin” menu item in the Tracker Menu bar. The top level administration screen shows a list of existing trackers for your project. From this page, existing trackers can be configured and new ones can be created (see Tracker Administration - Top Level Page). 
This section focuses on the configuration of an existing tracker. Creation of new tracker is covered in Tracker Creation. Tracker Administration - Top Level Page The configuration settings for a given tracker is divided in seven sections: - General Settings: name, description and some other general purpose parameters are defined in this section. - Permissions Management: allows you to give different access permissions to different users depending on their role. - Manage Field Sets: this is where you’ll decide what field sets to use in your tracker. - Manage Field Usage: this is where you’ll decide what fields to use in your tracker. - Manage Field Values: this section allows you to define the lists of values to be used by certain fields. - Manage Canned Responses: allows you to create some pre-defined follow-up comments that your team is using on a regular basis. - Manage Reports: search and browsing templates for the artifact search screen are defined here (search criteria et results table). - Email Notification Settings: fine tuning of the global and personal email notification settings.… - Allow artifact duplication: if artifact duplication is allowed or not. If it is, only project members are able to duplicate artifacts. -/Artifact level: at this level, you can define the group of users who have access to only certain artifacts or have no access at all. - and Artifacts. Tip A sample tracker permissions screen As an example of how you can use these permissions let’s assume that you have created a tracker where several of your customers can report defects on your software. In such a situation, you may decide that a customer from a given company should only see those defects that were submitted by its employees and not those submitted by other companies. To achieve this you just need to create a group called ‘company_A’ in which you include the login names of all the users working for company A. Then do the same with the group ‘company_B’ for company B. Those two groups will then be given the ‘access to artifacts submitted by group’ type of permission. In addition you probably want to grant ‘access to all artifacts’ to the project members or to the ‘tracker_administrators’ groups so that your team members can manage artifacts from any customer. In this example: - a user which is not logged-in will not have access to artifacts, - a user which is logged-in will not have access to artifacts, - a project member will have access to all artifacts, - a project admin will have access to all artifacts, since a project admin is a project member, - a tracker admin will have access to all artifacts, since a tracker admin is a project member, - a member of ugroup Company_A will only have access to artifacts submitted by members of the ugroup Company_A (the same for Company_B), - a project member which is also member of ugroup Company_A will have access to all artifacts since he is a project member, - a member of ugroup Company_C will not have access to artifacts (if he is not also member of a ugroup like project_member, Company_A or Company_B). Field Permissions Management¶ Beside defining access permissions for the tracker and its artifacts (see Tracker and Artifacts Sets Management¶ In order to improve the input of the artifact submission form, the fields of the trackers are grouped in field sets. 
This allows to open up the submission form, or to clusterize fields that have same semantics, or also to group fields that play a particular part (for instance, you could clusterize fields aimed to be filled by the one who is responsible for the diagnosis of the artifact, and then group fields aimed to be filled by the one who is in charge of fixing it, etc.). Each field must belong to a field set, and a single field can only belong to only one field set. Tracker Field Set List¶ To manage the field sets for this tracker, select the item “Manage field sets” on the welcome page of any Tracker Administration screen. The Field Set screen (Field Set screen of a project tracker) shows you a sample list of field sets available in the tracker of a Tuleap project. The screen is divided in 2 parts: the list of tracker field sets currently in use a form to create new tracker field sets. Information displayed on the Tracker Field Set list page are as follows: - Field Set Label: the name of the field. To change the properties of a field set simply click on the field set name. - Description: the field set description - Fields belonging to this field set: list of the fields that belong to this field set. The used fields are displayed in bold, the unused ones in italic. - Rank on screen: the rank number indicates in which order the field sets will appear on the artifact submission form and the artifact update form. Field sets with a smaller rank number are displayed first. - Delete?: when a field set is deleted, it completely disappears from the list of available field sets. Only empty field sets (that means not including any field) can be deleted. Field Set screen of a project tracker Creation and Modification of a Tracker Field Set¶ The forms used for the creation of a new tracker field set or the modification of an existing one being very similar both are covered in the same section. The field set creation form is available at the bottom of the tracker field set list where as the field set update form can be accessed by clicking on the field set label located on the left hand side of the field set list. Properties that can be tuned for a tracker field set are as follows: - Field Set Label: this is the name of the field set. - Description: a longer description of the purpose of this field set. - Rank on screen: this arbitrary number allows you to define the position of this field set on the artifact submission form and the artifact update form relative to other field sets. The field sets with smaller values will appear first on the screen. The rank values doesn’t have to be consecutive values. It is a good idea to use values like 10,20,30,… so that it is easy for you to insert new field sets in the future without having to renumber all the field sets. Field Usage Management¶ When a tracker is first created, it comes pre-configured with a set of fields inherited form the template that was used to create it. For the majority of projects hosted on Tuleap it is very likely that the standard Tracker templates (e.g Bugs, Tasks, Support Requests) will cover most of the needs. However Tuleap gives you the ability to customize the list of fields for your trackers. It can be a variation on an existing template with some field addition or removal or it can be an entirely new tracker created from an empty template. Tracker Field Types¶ The fields of a tracker can be of several types: Select Box, Multi-Select Box, Text Area, Text Field, Integer Field, Float Field and Date Field. 
Find below a detailed description of each type: - Select Box: a “Select Box” field takes its value from a predefined list of values defined by the tracker administrator. Depending on the browser you use it may be displayed slightly differently but it is generally shown as a pull-down menu with the list of predefined values. At any given time this type of field can only be assigned one single value. - Multi-Select Box: like the Select Box field described above this field takes its value from a predefined list of values. As opposed to the Select Box field, the Multi-Select Box can be given multiple values at once by the end user. As an example, this type of field can be used to assign several persons to a given task in a task tracker. - Text Area: allows the user to enter free text in a multi-line text area. The field “Original Submission” that is used to describe in details a defect, a task, etc. is of type “Text Area”. - Text Field: allows the user to enter free text either in a one-line text field. The summary of a defect or a task is a good example of a one-line text field. - Date Field: one-line field that only accept ISO formatted dates (YYYY-MM-DD) - Integer Field: one-line field that only accept well-formed integral numbers (e.g 3, -100, 2345…) - Float Field: one-line field that only accept well-formed floating point numbers (e.g 3.56, -100.3, 2345, 34E+6…) Tracker Field List¶ To decide what field to use and what field not to use select the item “Manage Field Usage” on the welcome page of any Tracker Administration screen. The Field Usage screen (Field Usage screen of a project tracker) shows you a sample list of fields available in the tracker of a Tuleap project. The screen is divided in 3 parts: the list of tracker fields currently in use the list of unused tracker fields (not shown on Field Usage screen of a project tracker) a form to create new tracker fields (not shown on Field Usage screen of a project tracker) Information displayed on the Tracker Field list page are as follows: - Field Label: the name of the field. To change the properties of a field simply click on the field name. - Type: tracker fields can be of several types: Select Box, Multi-Select Box, Text Field, Text Area, Date Field, Integer Field or Float Field. For a detailed description of the field types see Tracker Field Types. - Description: the field description. - Field Set : field set the field will belong to. - Rank on Screen: the rank number indicates in which order the fields will appear on the artifact submission form and the artifact update form. Fields with a smaller rank number are displayed first. The rank numbers are relatives regarding the field sets. This means that the fields are first displayed by field sets, and then by rank number, inside their own field set. - Status: - Used: the field is used by the tracker. - Unused: the field is not used by your tracker. Note that an unused field is just a “hidden” field. if you change the status of a field from used to unused all the data associated with this field are preserved. - Delete?: when a field is deleted, it completely disappears from the list of available fields. Furthermore all the data associated with this field is destroyed from all artifacts. Field Usage screen of a project tracker Creation and Modification of a Tracker Field¶ The forms used for the creation of a new tracker field or the modification of an existing one being very similar both are covered in the same section. 
The field creation form is available at the bottom of the tracker field list where as the field update form can be accessed by clicking on the field label located on the left hand side of the field list. Tip At any time in the life of your project you can enrich your trackers with new custom fields. However before you decide to create a new field make sure that there isn’t a predefined field that already plays the same role. By using predefined fields whenever possible, you’ll contribute to keep the global Tuleap environment consistent and make it easier for visitors, contributors or new team members to switch from project to another. Properties that can be tuned for a tracker field are as follows: Field. Field Type:tracker fields can be of several types: Select Box, Multi-Select Box, Text Area, Text Field, Integer Field, Float Field and Date Field. For a detailed description of the various field types see Tracker Field Types. Display Size: this property allows you to define how much space a field is going to take on the screen. It has a different meaning and a different format depending on the field type. Select Box: the display size does not apply to a select box. Any input typed in the display size field will be silently ignored. Multi-Select Box: the display size is made of a single number which indicates how many of the values associated with this field are visible at once. A reasonable value for the size of multi-select box is between 2 and 5. Text Field, Integer Field, Float Field: for all one-line fields, the display size follows the pattern “V/M” where V is the number of character visible at once in the field display window and M is the maximum number of characters that can be typed for this field. If V is less than M then the text will shift in the visible window as more text is entered. The maximum value of M is 255. A display size of “10/40” means a field that accepts 40 characters maximum and the field display is 10 characters in width. Date Field: A date always follows the same pattern (YYYY-MM-DD) and therefore it always has a fixed length of 10 characters. Text Area: for text areas, the display size follows the pattern “C/R” where C is the number of columns in the text area (the width in number of characters) and R is the number of rows or lines of text. Note that the number of lines is not limited to R. If the text typed in the field has more than R lines then a scrollbar will show up to navigate through the text. A display size of 60/7 means a text area with 7 lines that are 60 characters long. Field Set : it is the field set the field will belong to. Each field must belong to a field set, and a field cannot belong to more than one field set (in other words, a field belong to one and only one field set). Rank on screen: this arbitrary number allows you to define the position of this field on the artifact submission form and the artifact update form relative to other fields. The fields with smaller values will appear first on the screen. The rank values doesn’t have to be consecutive values. It is a good idea to use values like 10,20,30,… so that it is easy for you to insert new fields in the future without having to renumber all the fields. Tracker field usage settings Allow Empty Value: determines whether leaving the field blank in the artifact submission or update form is allowed or not. If unchecked the tracker won’t accept the form unless the field is given a value. 
The fields that must be filled out are marked with a red start on the submission and modification forms. Keep Change History: determines whether changes made to this field will be kept in the artifact history Use this field: This checkbox only appears on the field usage modification screen. When first created a field is automatically given the status “Used” (checkbox marked). Fields becoming unused will simply be hidden from the user view but all data attached to this field in the artifact database remains untouched. In other words, returning a field from unused to used will also restore the field data as they were before. Only the actual deletion of a field destroys the field data (see Tracker Field List). Field Values Management¶ Once fields have been defined for your tracker, the next step is to define a set of values for your fields. This mostly applies to “Select Box” and “Multi-Select Box” type of fields where the list of values you are going to choose will show up in the pull-down menus when an artifact is submitted or updated. Other field types are simply one-line fields with no restricted set of values. For those fields only the default value can be defined. Field List¶ To configure values assigned to the used fields of your tracker select the item “Manage Field Values” on the welcome page of the Tracker Administration screen or select the “Manage Field Values” item from the Tracker Administration menu bar at the top of the screen. Tracker field list with user definable values Information displayed on this page are as follows: - Field Label: the name of the field. Click on this name to view the list of values for this field. - Description: what this field is about Browsing Tracker Field Values¶ A click on any of the fields listed in the Field Value Management screen (see Field Values Management) brings you to the list of existing values for this field (see List of values for the “Resolution” field). The table of values shows the following information: - Value Label: text label of the value as shown in the select box. Click on this label to modify the value settings (see Updating a Tracker Field Value) - Description: meaning of the value - Rank: defines the order of the field values in the select box. The smaller values appear first at the top of the select box. - Status: - Active: the value is currently visible in the pull-down menus and can be assigned to the corresponding artifact field. - Hidden: the value is currently not visible in the pull-down menu. However if this value was used in the past by any of your project artifacts, it will continue to show up OK for this specific bug. - Permanent: this value has been defined for all trackers using the associated field it cannot be hidden nor deleted. Only the site administrators who have acces to the site tracker templates can mark values as permanent. The List of values for the “Resolution” field shows the list of available values for the Resolution field of tracker managing “Bugs” artifacts. The Resolution field comes with set of predefined values that are available to all Tuleap projects. By default 8 values are active (Accepted, Analyzed, etc.). Of course you are free to add your own values to the Resolution field. However, in order to keep a certain harmony from one Tuleap tracker to another, we highly recommend that you use the list of predefined resolution values as much as you can before creating new ones. 
List of values for the “Resolution” field

Defining a Default Field Value¶

All fields used in a tracker can be assigned a default value. Depending on the field type, you will be presented with either a free text field (for text, date, integer and float fields) or a select box containing all the values already defined for this field (for select box and multi-select box fields; not shown on Tracker field list with user definable values).

Creating a Tracker Field Value¶

To add a value, use the value creation form located below the list of field values (not shown on Tracker field list with user definable values).

Binding a Field to a List of Values¶

Not only does the Tuleap Tracker allow you to create a list of values for a select box, but it also offers the ability to associate a select box with a list of predefined values that is dynamically generated by the Tuleap system. A typical example of this is when one would like to create a new select box showing the list of project members. Instead of creating and maintaining the list of values manually, Tuleap builds this list for you and allows you to bind it to a field of type select box.

Updating a Tracker Field Value¶

From the screen showing the list of values for a given field (see Tracker field list with user definable values) you can change the properties of a field value by clicking on the value label:
- Value: change the value itself. The value typed here will appear as is in the pull-down menu. Keep in mind that if you change a value, the change will also be reflected in the artifacts that were using the old value.
- Rank: a number that allows you to specify where you want this value to appear in the list of all active values. The values with smaller rank are displayed first. When the “None” value is available for a field it has a rank number of 10. This number is deliberately small because by convention “None” always appears at the top of the pull-down menu. Please be a good Tuleap citizen and choose rank numbers higher than 10 for your own values.
- Status: Active or Hidden. As explained above, going from one to the other in the course of the project life has no negative impact on the artifact database.
- Description: says a bit more about the meaning of this value.

Setting a field value

Tip: Whether for Fields or Field Values, remember to use large numbers (in the hundreds or the thousands, like 100, 200, 300, …) when you create new values. By doing so you’ll make your life easier if you ever want to insert new values in between existing ones in the future and avoid a tedious renumbering of the existing items.

Field Dependencies¶

Field dependencies restrict the values proposed for a target field (here Version) according to the value selected in a source field (here Operating System). All values for both fields are displayed. Values which are part of a dependency between the two fields are emphasized (in bold).
- To create dependencies between Linux and the corresponding versions, you just have to select the source value and check/uncheck corresponding values in the target field. The highlighting helps to link source and target values, with the small arrows indicating the direction of reading, “source to target”.
- You can cancel your modifications by clicking on the reset button. Once validated, the modifications are saved.
Here are the dependencies for Linux:

Linux Dependencies

Now you can continue with the next source value, MacOS X:

MacOS X Dependencies

Then, with the next source value, MS Windows:

MS Windows Dependencies

And, with the last source value, NetBSD:

NetBSD Dependencies

You can also “navigate” through dependencies in the opposite direction, to define the source values which influence one target value:

Version 2.0 depends upon Linux and NetBSD systems

Once dependencies are defined, the end user (when submitting/updating an artifact) will see the Version options filtered according to the selection of the Operating System:

Proposed versions for Linux
Proposed versions for MacOS X
Proposed versions for MS Windows
Proposed versions for NetBSD

Canned Responses¶

Note that defining a new Canned Response can be done on the fly from the artifact update form by clicking on the “define a new Canned Response” link (see Follow-up comments).

Canned responses

Tracker Report Management¶

Knowing that project administrators, project members and other Tuleap users may have different needs in searching the tracker database, Tuleap offers the ability to define project- or user-specific tracker reports. Creating a tracker report consists of deciding what fields you want to use as search criteria and what fields you want to see in the columns of the table where the results of the search are presented. You can also define the order in which the search criteria and the columns will appear on the screen. Tracker Administrators have the ability to define tracker reports that will be usable by all registered users who have access to the tracker, whereas all other users can only define reports for their personal use.

Tip: While configuring Tracker reports you will probably notice that the configuration screen allows you to define the fields that you are going to use as search criteria, but not the values of these search criteria. This is on purpose. Defining a report template and filling out the search template with content (values) are two distinct operations on Tuleap. Once a report template has been defined in the administration module (e.g. ‘Simple Report’, ‘QA report’, ‘Daily report’ …) you can go to the tracker searching and browsing module and use the report template for all sorts of queries. Select the report you want from the pull-down menu, fill out the search form with the values you are interested in and click on the browse button. Then you can save the entire query (report plus values) with the Tuleap bookmarking mechanism (see tip in Selection Criteria). And voila!

Browsing Tracker Reports¶

Clicking on the “Manage Reports” item in the Tracker Administration menu bar at the top of the page displays the list of available reports (see Example of a list of tracker reports) with the following information:
- ID: a number that uniquely identifies the report. A click on the report ID brings you to the report configuration screen (see Tracker Report Setting).
- Report Name: the report short name as it will appear in the report select box when you use the artifact browsing screen (e.g. Simple Report, QA report, Monthly Report…).
- Description: a longer description of the report.
- Scope:
  - Project: this report will be usable by all project members. Only tracker administrators can define project-wide reports.
  - Personal: this report will be usable by its creator only.
  - System: this report is defined at the system level and cannot be removed. The default tracker report that comes pre-configured with each tracker is a system report.
- Delete?: click the trash icon to delete the report. Project-wide reports can only be deleted by project administrators.

Example of a list of tracker reports

The same interface is available to browse the trackers' graphical reports.

Tracker Report Setting¶

After you click on a report ID in the report list (see Browsing Tracker Reports), the report setting screen appears (see Setting a Tracker Report). This screen allows you to define what fields you’d like to use as search criteria and what fields you’d like to see in the list of artifacts retrieved from the database. The information available on this screen is as follows:
- Name: each report must be given a name. This name must not be too long as it will appear in a select box in the artifact browsing module when you are asked to choose what tracker report you want to use to query your artifact database.
- Scope: tracker administrators can define project-wide reports that will be made available to all users. Non-tracker administrators can only define personal reports.
- Description: a longer description of the report.
- Field selection: the field table shows all the fields that are currently in use in your tracker. For each field you can set up the following parameters:
  - Use as a Search Criteria: if you check this box the field will appear as one of the selection criteria when you search the tracker database.
  - Rank on Search: a number can be entered in this field. The rank number allows you to place the field with respect to the others. The fields with smaller values will appear first on the list of selection criteria displayed on the screen. These numbers don’t have to be consecutive.
  - Use as a Report Column: if you check this box the field will appear as one of the columns in the search results table.
  - Rank on Report: a number can be entered in this field. The rank number allows you to place the field with respect to the others. The fields with smaller values will appear first in the search results table (from left to right). These numbers don’t have to be consecutive.
  - Column Width (optional): in case you want to impose a specific width on a column in the report table, you can specify a column width as a percentage of the total page width. This is optional and our recommendation is to leave it blank unless your Web browser doesn’t do a good job of formatting your table. If you want a column to be as narrow as possible while preserving word boundaries, enter a very small percentage like 1 or 2 in the column width field.

Note: it is perfectly OK to use a field as a search criterion and not as a column in the tracker report, and vice versa. For the fields you don’t want to use at all in the report, leave both check boxes unchecked.

Setting a Tracker Report

Tracker Graphical Report Setting¶

After you click on a graphical report ID in the graphical report list (see Browsing Tracker Reports), the report setting screen appears. This screen allows you to define what type of graphs will be displayed. Three graph types are supported: Pie, Bar and Gantt.

Creating / Editing a graph¶

To create a new graph for the graphical report, just click on the type of the graph you want to create: Pie, Bar or Gantt. To edit an existing graph, click on the pencil button in the upper right corner of the graph. By clicking on the red cross button, you will delete the graph. Common information available on the creation/edition screen is as follows:
- Title: each graph must be given a name.
This name must not be too long as it will appear in the upper center of the graph.
- Description: enter a short description of the graph here; it will appear under the title in the graph.
- Rank: the rank sets the display order of the various graphs in the graphical report.

Creating / Editing a Pie graph¶

Specific information available for the Pie graph is as follows:
- Width and Height: set the size of the graph in pixels.
- Source Data: set the tracker field on which computation of the Pie graph will be based.

Creating / Editing a Pie graph

Creating / Editing a Bar graph¶

Specific information available for the Bar graph is as follows:
- Width and Height: set the size of the graph in pixels.
- Source Data: set the tracker field on which computation of the Bar graph will be based.
- Group by: set a tracker field by which computation of the source field will be grouped.

Creating / Editing a Bar graph

Creating / Editing a Gantt graph¶

Specific information available for the Gantt graph is as follows:
- Start Date: set the tracker field for the start date.
- Finish Date: set the tracker field for the finish date.
- Due Date: set the tracker field for the due date.
- Time Scale: can be day, week, month or year.
- As of date: date considered as a reference for data display. The default value is today.
- Summary: text to be displayed on the left of the Gantt chart, and in the bar tooltip.
- Progress: percentage of completion of the task. Must be an integer field displayed in a Text Field, with values between 0 and 100.
- Information at the right of the bars: text to be displayed at the right of the Gantt bars.

Creating / Editing a Gantt graph

As explained earlier in E-mail Notification, you can define a list of comma-separated email addresses to which submissions of new artifacts (and optionally artifact updates) will be systematically sent. Note that in this case notifications will be sent to users regardless of their personal preferences (see the section “Event/Role Based Notification Settings” below).

Tracker Watchers¶

The Tuleap Tracker offers all project members the ability to be carbon-copied on all email notifications sent to some other project members. Here are a couple of examples where the tracker watch feature can be extremely useful:

Backups: when a team member is away from the office it is often convenient to delegate her artifact management activity to another person in the team who is acting as a backup. Becoming the backup of another team member is as easy as inserting her name in the Watchers field when the team member leaves and removing it when the team member returns. As soon as you specify a person's name in the watchers field you’ll start receiving all the artifact notifications of this person and you can act accordingly on her behalf.

QA Contacts: another possible use is for the QA team members to fill the tracker watcher field with the names of the software engineers whose QA activity they are responsible for.

Note: the goal of the tracker watch feature is not to spy on you. To make sure that you are only watched by authorized persons, Tuleap always shows you the list of Tuleap users who are currently watching your email notifications.

Event/Role Based Notification Settings¶

This is the most sophisticated part of the customization process. It allows any user to specify what types of events they want to be notified of by email. Note that these settings are project and user specific, so you can tune your own email notification preferences for each tracker you are involved with.
The customization matrix (see Configuration of the Personal Notification Matrix) presents you with a series of check boxes. Each check box allows you to specify what kind of events you want to be aware of, depending on the role you play with respect to the artifact. There are 4 roles defined with respect to an artifact:
- Submitter: you are the person who initially reported the artifact by filling out the artifact submission form.
- Assignee: you were assigned the artifact and you are therefore responsible for managing it.
- CC: you are mentioned in the list of CC names (see CC List).
- Commenter: you have once posted a follow-up comment in the artifact.

For each of these roles you can instruct the Tuleap Tracker to send email notifications to you only when some specific events occur. Nine different events (see the right-most column on Configuration of the Personal Notification Matrix) are monitored by the Tuleap Tracker. The description of the events is self-explanatory and calls for only one comment: the first 8 events in the list can only occur on artifact updates. Only the last event relates to the submission of a new artifact.

Let’s review the sample matrix shown on Configuration of the Personal Notification Matrix and see, step by step, how this user has configured her notification settings:

First, let’s look at the Commenter column. The Commenter column says that this user has decided that if she is involved in an artifact as a Commenter (she just posted a follow-up comment at some point in time) then she is only interested in receiving an email notification when the status of the artifact goes to “Closed” or when any of the fields Priority, Status and Severity is modified. All other events will be ignored by the Tuleap tracker and no notification will be sent to this user.

Second, looking at the matrix by row, one can see that the user said that when she makes a modification to an artifact herself (event “I am the author of the change”) she doesn’t want to receive any email notification, whatever her role in this artifact is. Please note that the event “I am the author of the change” overlaps with other events. So in our example, the submitter will not get a notification when she adds a new artifact, even if the event “A new artifact has been submitted” is marked.

Finally, the user also said that when a new artifact is submitted to the project and is immediately assigned to her (Assignee role), she wants to be notified. However, if she is the submitter of the new artifact then she is not interested in receiving the notification. Note that the Commenter role is meaningless for the event “A new artifact has been submitted” because follow-up comments can only be added at update time, not at creation time. Also, the event “I’m added or removed from this role” is meaningless for the Submitter and the Commenter because these roles cannot be modified in an artifact.

Configuration of the Personal Notification Matrix

Suspend Email Notification¶

Sometimes it can be convenient to suspend all email notifications for one specific tracker, for instance during maintenance tasks. By selecting this option, a tracker administrator disables both global notifications and event/role notifications. This feature is typically used when doing mass changes or for testing purposes.
Migrate to tracker v5¶

How to run a migration (for now, it requires an admin login on the server):

# Run the whole migration
codendiadm@tuleap$ time sh /usr/share/tuleap/plugins/tracker/bin/migrate_tv3_to_tv5.sh tuleap_username 105 119 Defects "defect tracker" defect
# Parameter 1: project id
# Parameter 2: tracker v3 id
# Parameter 3: tracker v5 name
# Parameter 4: tracker v5 description
# Parameter 5: tracker v5 item name (short name / reference)

# Just dump the tracker v3 for debug
codendiadm@tuleap$ time /usr/share/tuleap/src/utils/php-launcher.sh /usr/share/tuleap/src/utils/TrackerV3-data-exporter.php -d 119 $HOME/archive.zip

General¶

Fields might not have history or, worse, can have partial history (changes recorded only for a portion of the artifact's lifetime). In those cases, a fake changeset is created at the time of export for those values.

Attachment¶

- Deleted attachments are not exported. They will not appear in the history either.
- If an artifact contains two attachments with the same name, the export will not be able to distinguish them and it will skip them.

Numeric fields¶

Values of Integer (resp. Float) fields are exported as int (resp. float). It sounds obvious, but as you may know by now, tracker v5 fields like Integer or Float cannot change their type, whereas that was possible in v3. This means that in the history of an Integer (Float) field in v3 we may find values that are plain strings instead of int (float) if the field type had been changed from String to Integer (Float). The values are then cast into the right type in order to be imported into a tracker v5.

Multi-select boxes¶

Static values¶

There are some strange cases on the database side. The database stores:
- A comma-separated string if multiple values are selected
- The label if it is a unique value
- 0 when the field is cleared without selecting any value
- ‘Any’ or ‘Tous’, depending on the language, when the value is saved after the old value was a cleared field

We can manage the first case because we are sure that there are only labels. The following cases are ambiguous:
- How to be sure that 0 is the label of a value rather than the representation of a cleared field?
- If the unique value is an int, how to be sure that this number is a label instead of an ID sometimes stored in the database?
- If a label has a comma in its content, we are not able to manage it.
- Finally, when the label can be a system word, we don’t know if it is the label or a magic system word saved in the database.

Follow-up Comments¶

As many follow-up comments as needed can be attached to any given artifact. Follow-up comments are free text fields where virtually any kind of information or comment can be typed in. Follow-up comments have several interesting capabilities and extensions. Defining a new Canned Response can be done on the fly from the artifact update form by clicking on the “define a new Canned Response” link.

Comment Types: in order to avoid the exponential growth of new artifact fields to store all sorts of free text information, Tuleap offers an interesting mechanism called Comment Types. The project team has the ability to define a list of labels that can be used to characterize the nature of a follow-up comment. This is a very helpful feature to define the nature of the information contained in a follow-up comment and to quickly identify these comments in the long list of follow-up comments.
Typical examples of such comment types are: “Workaround” for a comment where you explain how to work around a bug, “Impacted Files” to give the list of source files impacted by the bug resolution (assuming your artifacts are bugs), “Test case” to document how to test the code and make sure that this case will be covered in the future test suite, etc. Comment types are defined in the Tracker Administration module (see Tracker Administration).

Cross-References: while typing a follow-up comment, you can use special text patterns to refer to other artifacts, documents, files, or CVS or Subversion commits. These patterns will be automatically displayed as hyperlinks when the follow-up comment is displayed on the screen. This is an extremely powerful and easy-to-use mechanism that is covered in more detail in Artifact Cross-Referencing.

Follow-up comments
https://docs.tuleap.org/user-guide/tracker-v3.html
Overview Purpose¶ This documentation teaches you how to use the Virtual Developer code generators. In order to be able to provide input for the generators and to work with generator output, you have to be familiar with the concepts of the modeling languages and the software architecture that the generation results are based upon. Prerequisites¶ To be able to use the generators, you need the following things: - An account for the Virtual Developer Platform - An installed Virtual Developer Connector (e.g. the Connector for Eclipse) - Optional but useful: An installed Virtual Developer Modeler (Eclipse Plugin). You could also use a basic text editor for the creation of models. But the Virtual Developer Modeler makes your life much easier by providing syntax highlighting, code proposals and the visualization of errors. Other useful Resources¶ There are further resources on the web that give you support for your code generation efforts:
https://docs.virtual-developer.com/technical-documentation/gapp-generators/index.html
15.14. curses.panel — A panel stack extension for curses
15.14.1. Functions
15.14.2. Panel Objects
Panel.hidden() — Returns true if the panel is hidden (not visible), false otherwise.
https://docs.python.org/2.7/library/curses.panel.html
Loop Parameters (Concurrency) SynopsisThis Operator iterates over its subprocess for all the defined parameter combinations. The parameter combinations can be set by the wizard provided in parameters. Description The Loop Parameters Operator is a nested Operator. It executes the subprocess for all combinations of selected values of the parameters. This can be very useful for plotting or logging purposes and sometimes for simply configuring the parameters for the inner Operators as a sort of meta step. Any results of the subprocess are delivered through the output ports. This Operator can be run in parallel. The entire configuration of this Operator is done through the edit parameter settings parameter. Complete description of this parameter can be found in the parameters section. The inner performance port can be used to log the performance of the inner subprocess. When it is connected, a log is created automatically to capture the number of the run, the parameter settings and the main criterion or all criteria of the delivered performance vector, depending on the parameter log all criteria. Please note that if no results are delivered at the end of a process, the log tables still can be seen in the Results View even if it is not automatically shown. Please note that this Operator has two modes: synchronized and non-synchronized. They depend on the setting of the synchronize parameter. In the latter, all parameter combinations are generated and the subprocess is executed for each combination. In the synchronized mode, no combinations are created but the parameter values are treated as a list of combinations. For the iteration over a single parameter there is no difference between both modes. Please note that the number of parameter possibilities must be the same for all parameters in the synchronized mode. As an Example, having two boolean parameters A and B (both with true(t)/false(f) as possible parameter settings) will produce four combinations in non-synchronized mode (t/t, f/t, t/f, f/f) and two combinations in synchronized mode (t/t, f/f). If the synchronize parameter is not set to true, selecting a large number of parameters and/or large number of steps (or possible values of parameters) results in a huge number of combinations. For example, if you select 3 parameters and 25 steps for each parameter then the total number of combinations would be above 17576 (i.e. 26 x 26 x 26). The subprocess is executed for all possible combinations. Running a subprocess for such a huge number of iterations will take a lot of time. So always carefully limit the parameters and their steps. Differentiation The Optimize Parameters (Grid) Operator executes the subprocess for all combinations of selected values of the parameters and then delivers the optimal parameter values. The Loop Parameters Operator, in contrast to the optimization Operators, simply iterates through all parameter combinations. This might be especially useful for plotting purposes. Optimize Parameters (Grid) Tutorial Processes Iterating through the parameters of the SVM Operator The 'Weighting' data set is loaded using the Retrieve Operator. The Loop Parameters Operator is applied on it. Have a look at the Edit Parameter Settings parameter of the Loop Parameters Operator. You can see in the Selected Parameters window that the C and gamma parameters of the SVM Operator are selected. Click on the SVM.C parameter in the Selected Parameters window, you will see that the range of the C parameter is set from 0.001 to 100000. 
11 values are selected (in 10 steps) logarithmically. Now, click on the SVM.gamma parameter in the Selected Parameters window, you will see that the range of the gamma parameter is set from 0.001 to 1.5. 11 values are selected (in 10 steps) logarithmically. There are 11 possible values of two parameters, thus there are 121 ( i.e. 11 x 11) combinations. The subprocess will be executed for all combinations of these values because the synchronize parameter is set to false, thus it will iterate 121 times. In every iteration, the value of the C and/or gamma parameters of the SVM(LibSVM) Operator is changed. The value of the C parameter is 0.001 in the first iteration. The value is increased logarithmically until it reaches 100000 in the last iteration. Similarly, the value of the gamma parameter is 0.001 in the first iteration. The value is increased logarithmically until it reaches 1.5 in the last iteration. Have a look at the subprocess of the Loop Parameters Operator. First the data is split into two equal partitions using the Split Data Operator. The SVM (LibSVM) Operator is applied on one partition. The resultant classification model is applied using an Apply Model Operator on the other partition. The statistical performance of the SVM model on the testing partition is measured using the Performance (Classification) Operator. At the end the Loop Parameters Operator automatically logs the parameter settings and performance. The log contains the following four things: The iteration numbers of the Loop Parameters Operator are counted. This is stored in a column named 'Iteration'. The classification error of the performance of the testing partition is logged in a column named 'classification error'. The value of the C parameter of the SVM (LibSVM) Operator is stored in a column named 'SVM.C'. The value of the gamma parameter of the SVM (LibSVM) Operator is stored in a column named 'SVM.gamma'. Run the process and turn to the Results View. Now have a look at the values logged by the Loop Parameters Operator. Parameters - edit_parameter_settings The parameters are selected through the edit parameter settings menu. You can select the parameters and their possible values through this menu. This menu has an Operators window which lists all the operators in the subprocess of this Operator. When you click on any Operator in the Operators window, all parameters of that Operator are listed in the Parameters window. You can select any parameter through the arrow keys of the menu. The selected parameters are listed in the Selected Parameters window. Only those parameters should be selected for which you want to iterate the subprocess. This Operator iterates through parameter values in the specified range. The range of every selected parameter should be specified. When you click on any selected parameter (parameter in Selected Parameters window), the Grid/Range and Value List option is enabled. These options allow you to specify the range of values of the selected parameters. The Min and Max fields are for specifying the lower and upper bounds of the range respectively. As all values within this range cannot be checked, the steps field allows you to specify the number of values to be checked from the specified range. Finally the scale option allows you to select the pattern of these values. You can also specify the values in form of a list. Range: menu - error_handling This parameter allows you to select the method for handling errors occurring during the execution of the inner process. 
It has the following options: - fail_on_error: In case an error occurs, the execution of the process will fail with an error message. - ignore_error: In case an error occurs, the error will be ignored and the execution of the process will continue with the next iteration. - log_performance This parameter will only be visible if the inner performance port is connected. If it is connected, the main criterion of the performance vector will be automatically logged with the parameter set if this parameter is set to true. Range: boolean - log_all_criteria This parameter allows for more logging. If set to true, all performance criteria will be logged. Range: boolean - synchronize This Operator has two modes: synchronized and non-synchronized. They depend on the setting of this parameter. If it is set to false, all parameter combinations are generated and the inner Operators are applied for each combination. If it is set to true, no combinations are created but the parameter values are treated as a list of combinations. For the iteration over a single parameter there is no difference between both modes. Please note that the number of parameter possibilities must be the same for all parameters in the synchronized mode. Range: boolean - enable_parallel_execution This parameter enables the parallel execution of the subprocess. Please disable the parallel execution if you run into memory problems. Range: boolean Input input (IOObject) This Operator can have multiple inputs. When one input is connected, another input port becomes available which is ready to accept another input (if any). The order of inputs remains the same. The Object supplied at the first input port of this Operator is available at the first input port of the nested chain (inside the subprocess). Do not forget to connect all inputs in correct order. Make sure that you have connected the right number of ports at the subprocess level. Output output (Collection) Any results of the subprocess are delivered through the output ports. This Operator can have multiple outputs. When one output port is connected, another output port becomes available which is ready to deliver another output (if any). The order of outputs remains the same. The Object delivered at the first output port of the subprocess is delivered at the first outputport of the Operator. Don't forget to connect all outputs in correct order. Make sure that you have connected the right number of ports.
https://docs.rapidminer.com/latest/studio/operators/utility/process_control/loops/loop_parameters.html
You should only use the battery that Research In Motion specifies for use with your BlackBerry smartphone model.
http://docs.blackberry.com/en/smartphone_users/deliverables/55420/als1334683335336.html
The following are proposals being considered by the GeoTools community; the pages reflect the proposals in their final form. For discussion, each should have an associated Jira item (but we can try to link to email discussion threads as well).
The following proposals are in flight and are being worked on:
Completed
- Datastore Capabilities API — Datastore capabilities
- Color blending and compositing — Compositing and blending
Proposals that have been approved and completed will be slotted in as children of the release in which they make their first appearance.
http://docs.codehaus.org/pages/viewpage.action?pageId=147816538
The build process requires a number of build-time parameters that specify the features and components for a Jikes RVM build. Typically the build parameters are defined within a property file located in the build/configs directory. The following table defines the parameters for the build configuration.

A typical user will use one of the existing build configurations, and thus the build system only requires that the user specify the config.name property. The name should match one of the files located in the build/configs/ directory minus the '.properties' extension.

There are many possible Jikes RVM configurations. Therefore, we define four "logical" configurations that are most suitable for casual or novice users of the system. The four configurations are: The mapping of logical to actual configurations may vary from release to release. In particular, it is expected that the choice of garbage collector for these logical configurations may be different as MMTk evolves.

Most standard Jikes RVM configuration files loosely follow the following naming scheme: <boot image compiler> "Base"|"Adaptive" <garbage collector>, where "Base"|"Adaptive" denotes whether or not the adaptive system and optimizing compiler are included in the build. The following garbage collection suffixes are available: For example, to specify a Jikes RVM configuration, use the name "BaseBaseSemiSpace".

Some files augment the standard configurations as follows:
- "Full" at the beginning of the configuration name identifies a configuration such that all the Jikes RVM classes are included in the boot image. (By default only a small subset of these classes are included in the boot image.) "FullAdaptive" images have all of the included classes already compiled by the optimizing compiler. "FullBaseAdaptive" images have the included classes already compiled by the baseline compiler; the adaptive system will later recompile any hot methods.
- "Fast" at the beginning of the configuration name identifies a "Full" configuration where all assertion checking has been turned off.
- Note: "Full" and "ExtremeAssertions" indicate that the config.assertions configuration parameter is set to extreme. This turns on a number of expensive assertions.

In configurations that include the adaptive system (denoted by "Adaptive" in their name), methods are initially compiled by one compiler (by default the baseline compiler) and then online profiling is used to automatically select hot methods for recompilation by the optimizing compiler at an appropriate optimization level.

For example, to do a build for an adaptive configuration, where the optimizing compiler is used to compile the boot image and the semi-space garbage collector is used, use the following command:

% ant -Dconfig.name=OptAdaptiveSemiSpace
http://docs.codehaus.org/exportword?pageId=73261
java.lang.Object
  org.jboss.dna.common.text.TokenStream
    org.jboss.dna.sequencer.ddl.DdlTokenStream

public class DdlTokenStream extends TokenStream

A TokenStream implementation designed around requirements for tokenizing and parsing DDL statements. Because of the complexity of DDL, it was necessary to extend TokenStream in order to override the basic tokenizer to tokenize the in-line comments prefixed with "--". In addition, because there is no default DDL command (or statement) terminator, an override method was added to TokenStream to allow re-tokenizing the initial tokens to re-type the tokens, remove tokens, or perform any other operation that simplifies parsing. In this case, both reserved words (or key words) and statement start phrases can be registered prior to the TokenStream's start() method. Any resulting tokens that match the registered string values will be re-typed to identify them as key words (DdlTokenizer.KEYWORD) or statement start phrases (DdlTokenizer.STATEMENT_KEY).

public DdlTokenStream(String content, TokenStream.Tokenizer tokenizer, boolean caseSensitive)
Parameters: content, tokenizer, caseSensitive

public void registerStatementStartPhrase(String[] phrase)
Examples would be: {"CREATE", "TABLE"}, {"CREATE", "OR", "REPLACE", "VIEW"}. See DdlConstants for the default SQL 92 representations. Parameter: phrase

public void registerStatementStartPhrase(String[][] phrases)

public void registerKeyWord(String keyWord)
Parameter: keyWord

public void registerKeyWords(List<String> keyWords)
Register a List of key words. Parameter: keyWords

public void registerKeyWords(String[] keyWords)
Parameter: keyWords

public boolean isNextKeyWord()
Determines whether the next token is a key word (DdlTokenStream.DdlTokenizer KEYWORD).

public boolean isNextStatementStart()

public void mark()

public String getMarkedContent()

public static DdlTokenStream.DdlTokenizer ddlTokenizer(boolean includeComments)
Obtain a DdlTokenStream.DdlTokenizer implementation that ignores whitespace but includes tokens for individual symbols, the period ('.'), single-quoted strings, double-quoted strings, whitespace-delimited words, and optionally comments. Note that the resulting Tokenizer may not be appropriate in many situations, but is provided merely as a convenience for those situations that happen to be able to use it. Parameter: includeComments - true if the comments should be retained and be included in the token stream, or false if comments should be stripped and not included in the token stream.
http://docs.jboss.org/jbossdna/latest/api/org/jboss/dna/sequencer/ddl/DdlTokenStream.html
Web frameworks
- NanoWeb
- RIFE
- Simple Advanced Groovy templating, provides a lightweight version of Struts Tiles
- Groovy Tapestry
- GvTags JSP tag library for Groovy

Testing Tools
- Canoo WebTest allows specifying Test Steps scripted in Groovy, bundling a series of Test Steps with the MacroStepBuilder, and creating a whole WebTest using the AntBuilder.
http://docs.codehaus.org/pages/viewpage.action?pageId=39160
A sliced job refers to the concept of a distributed job. Distributed jobs are used for running a job across a very large number of hosts, allowing you to run multiple ansible-playbooks, each on a subset of an inventory, that can be scheduled in parallel across a cluster. By default, Ansible runs jobs from a single control instance. For jobs that do not require cross-host orchestration, job slicing takes advantage of automation controller’s ability to distribute work to multiple nodes in a cluster. Job slicing works by adding a Job Template field job_slice_count, which specifies the number of jobs into which to slice the Ansible run. When this number is greater than 1, automation controller will generate a workflow from a job template instead of a job. The inventory will be distributed evenly amongst the slice jobs. The workflow job is then started, and proceeds as though it were a normal workflow. When launching a job, the API will return either a job resource (if job_slice_count = 1) or a workflow job resource. The corresponding User Interface will redirect to the appropriate screen to display the status of the run. Consider the following when setting up job slices: A sliced job creates a workflow job, and then that creates jobs. A job slice consists of a job template, an inventory, and a slice count. When executed, a sliced job splits each inventory into a number of “slice size” chunks. It then queues jobs of ansible-playbook runs on each chunk of the appropriate inventory. The inventory fed into ansible-playbook is a pared-down version of the original inventory that only contains the hosts in that particular slice. The completed sliced job that displays on the Jobs list are labeled accordingly, with the number of sliced jobs that have run: These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignation to instance groups based on inventory mapping). Sliced job templates with prompts and/or extra variables behave the same as standard job templates, applying all variables and limits to the entire set of slice jobs in the resulting workflow job. However, when passing a limit to a Sliced Job, if the limit causes slices to have no hosts assigned, those slices will fail, causing the overall job to fail. A job slice job status of a distributed job is calculated in the same manner as workflow jobs; failure if there are any unhandled failures in its sub-jobs. Warning Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts) should not be configured as a slice job. Any job that does, may fail, and automation controller will not attempt to discover or account for playbooks that fail when run as slice jobs. When jobs are sliced, they can run on any node and some may not run at the same time (insufficient capacity in the system, for example). When slice jobs are running, job details display the workflow and job slice(s) currently running, as well as a link to view their details individually. By default, job templates are not normally configured to execute simultaneously ( allow_simultaneous must be checked in the API or Enable Concurrent Jobs in the UI). Slicing overrides this behavior and implies allow_simultaneous even if that setting is unchecked. See Job Templates for information on how to specify this, as well as the number of job slices on your job template configuration. 
The Job Templates section provides additional detail on performing the following operations in the User Interface:

Launch workflow jobs with a job template that has a slice number greater than one

Cancel the whole workflow or individual jobs after launching a slice job template

Relaunch the whole workflow or individual jobs after slice jobs finish running

View the details about the workflow and slice jobs after launching a job template

Search slice jobs specifically after you create them (see subsequent section, Search job slices)

To make it easier to find slice jobs, use the Search functionality to apply a search filter to:

job lists to show only slice jobs

job lists to show only parent workflow jobs of job slices

job templates lists to only show job templates that produce slice jobs

To show only slice jobs in job lists, as with most cases, you can filter either on the type (jobs here) or unified_jobs:

/api/v2/jobs/?job_slice_count__gt=1

To show only parent workflow jobs of job slices:

/api/v2/workflow_jobs/?job_template__isnull=false

To show only job templates that produce slice jobs:

/api/v2/job_templates/?job_slice_count__gt=1
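As a quick illustration, the same filters can be queried from the command line with curl. The controller hostname and credentials below are placeholders, and depending on your installation you may authenticate with an OAuth token instead of basic auth:

# Show only slice jobs (produced by templates with a slice count greater than 1)
curl -sk -u admin:password "https://controller.example.com/api/v2/jobs/?job_slice_count__gt=1"

# Show only the parent workflow jobs that wrap job slices
curl -sk -u admin:password "https://controller.example.com/api/v2/workflow_jobs/?job_template__isnull=false"

# Show only job templates configured to produce slice jobs
curl -sk -u admin:password "https://controller.example.com/api/v2/job_templates/?job_slice_count__gt=1"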
https://docs.ansible.com/automation-controller/latest/html/userguide/job_slices.html
AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. With iLert's AWS Personal Health Dashboard integration, you can automatically create incidents in iLert from problems in AWS Personal Health Dashboard. That way, you will never miss a critical alert and always alert the right person using iLert's on-call schedules, automatic escalation, and multiple alerting channels. When AWS Personal Health Dashboard reports an issue, an incident is created in iLert.

1. Go to the "Alert sources" tab and click "Create new alert source"

2. Enter a name and select your desired escalation policy. Select "AWS Personal Health Dashboard" as the Integration Type and click on Save.

3. On the next page, a Webhook URL is generated. You will need this URL below when setting up the SNS topic subscription in AWS.

If you have already created an SNS topic for your AWS Personal Health Dashboard that you want to reuse, you can proceed to step 3.

1. In the SNS Dashboard click on Create topic

2. Name the topic and create it.

3. Create a subscription for the topic, using HTTPS as the protocol and the iLert webhook URL generated above as the endpoint.

Next, link a Personal Health Dashboard rule to the topic you have created. The following section describes how to create a rule and make the link.

1. In AWS, click on the Alerts icon and select View all alerts

2. In the AWS Personal Health Dashboard click on Dashboard and then click on Set up notifications with CloudWatch Events to add a rule

3. In the Event Source section, choose the Event Pattern

4. In the Service Name section, choose the Health service

5. In the Event Type section, choose the Specific Health events service

6. In the next section, choose Any service to receive a health update for each AWS service or choose Specific service(s) and select a service that interests you

7. In the Targets section, choose the SNS Topic and select the SNS topic that you generated before

8. Click on the Configure details button

9. On the next page in the Name section, enter a name for the rule

10. Click on the Create rule button

Will incidents in iLert be resolved automatically? Yes, as soon as the Personal Health issue is solved in AWS, the incident in iLert will be closed.

Can I link AWS Personal Health Dashboard to multiple alert sources in iLert? Yes, create an SNS topic subscription in AWS for each alert source.
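If you prefer to script the AWS side instead of using the console, the SNS topic and its HTTPS subscription to the iLert webhook can be created with the AWS CLI. This is only a sketch: the topic name, region, account ID, and webhook URL are placeholders, and the CloudWatch Events rule described above still has to target the resulting topic ARN:

# Create the SNS topic and note the returned TopicArn
aws sns create-topic --name ilert-aws-phd --region us-east-1

# Subscribe the iLert webhook URL (from the alert source created above) to the topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:ilert-aws-phd \
  --protocol https \
  --notification-endpoint "<webhook-url-from-ilert>"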
https://docs.ilert.com/integrations/aws-phd/
The Rules tab of the Roles wizard provides building blocks used to build a table of device detection rules. For example, you can combine detection rules to build an expression that excludes and/or checks for specific management objects (SNMP, WMI), system characteristics, or host attribute values.

Tip: As a practical guide for choosing and combining device detection rules, you can browse the default rule sets (rules shipped within WhatsUp Gold). You can find more detailed rule sets applied in the default sub roles.

Important: Creating many role definitions with large rule set expressions can slow discovery and refresh scan performance.

SNMP object '1.3.6.1.2.1.33.1.1.1' (upsIdentManufacturer) contains 'MGE'
SNMP object '1.3.6.1.4.1.789.1.25.2.1.6' (nodeAssetTag) contains '02040'
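Before wiring an OID into a detection rule like the two examples above, it can help to confirm what the device actually returns for that object. A plain snmpwalk against the device shows the raw values the rule expression will be matched against; the community string, address, and OIDs below are only examples to adapt:

snmpwalk -v 2c -c public 192.0.2.15 1.3.6.1.2.1.33.1.1.1
snmpwalk -v 2c -c public 192.0.2.15 1.3.6.1.4.1.789.1.25.2.1.6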
https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/43650.htm
Join our M3 Community 375+ active group members 78 contributors across 20+ companies Ways to contribute Welcome to the M3 community! There are a variety of resources to share the latest news, answer questions, and troubleshoot any issues. See the below list for more information on these resources. Slack Engage with other members of the community through the M3 slack channel. Github Post and review any feedback, code changes, and issues in our M3 GitHub repository. Office Hours Schedule time with M3 engineering to answer any questions or concerns. We hold office hours the third Thursday of every month. Media M3: Uber’s Open Source, Large-scale Metrics Platform for Prometheus Learn how and why M3 was created and used by Uber Maximizing M3 – Pushing performance boundaries in a distributed metrics engine at global scale CNCF Member Webinar - August 2020 Smooth Operator: Large Scale Automated Storage with Kubernetes Keynote from KubeCon Seattle 2018
https://docs.m3db.io/community/
2.1.7 [HTML] Section 8.1.3, Interpretation of language codes

V0007: The specification states:

Quirks Mode and IE7 Mode (All Versions)
The pseudo-class :lang is not supported.

IE8 Mode and IE9 Mode (All Versions)
The pseudo-class :lang is applied in source order. More exact matches are not favored over less exact ones.
https://docs.microsoft.com/en-us/openspecs/ie_standards/ms-html401/a5976a96-8bea-4d4a-a1f8-9158c0a76f84
Split Open Theme Animation. Opened Target Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. public: property DependencyObject ^ OpenedTarget { DependencyObject ^ get(); void set(DependencyObject ^ value); }; DependencyObject OpenedTarget(); void OpenedTarget(DependencyObject value); public DependencyObject OpenedTarget { get; set; } Public Property OpenedTarget As DependencyObject Property Value The UI element that will be clipped. Remarks Don't set this in XAML. For a XAML declaration, use OpenedTargetName instead.
https://docs.microsoft.com/en-us/windows/winui/api/microsoft.ui.xaml.media.animation.splitopenthemeanimation.openedtarget?view=winui-3.0
Managing creatives

#Editing creatives

On the Campaign Overview page, select the campaign that contains the creative you want to edit. Go to the Referral Creatives tab. Identify the creative that you want to edit. Under the Actions column, click More Actions, then click Edit. The creative wizard appears. Make any edits needed. Click Save.

#Pausing live creatives

On the Campaign Overview page, click on the campaign that the creative belongs to. On the Campaign Details page, find the creative you want to pause in the Referral Creatives tab. Under Actions, click Pause. The creative is immediately paused across all audiences linked to the creative. Note that pausing the creative is not the same thing as unlinking a creative from an audience. The paused creative will still be linked to all audiences it was linked to before being paused.

#Resuming paused creatives

On the Campaign List page, select the campaign that the creative belongs to. On the Campaign Details page, find the creative you want to resume in the Referral Creatives tab. Under Actions, click Resume. The creative is immediately reactivated across all audiences linked to the creative. Note: Pausing a creative is not the same thing as unlinking a creative from an audience. The reactivated creative maintains its links to any existing audiences.
https://docs.rokt.com/docs/user-guides/rokt-ecommerce/campaigns/creatives/managing
Nimble Storage Rest API

Overview

Plugin-Pack assets

Monitored Objects
- Nimble Flash Arrays (NimbleOS >= 2.3.x)

Available services
The current version of the Nimble Rest API Plugin-Pack can monitor the following services:
- Arrays
- Hardware
- Volumes

Collected metrics
The following metrics are collected by the Centreon Nimble Rest API Plugin:

Prerequisites

Nimble Rest API configuration
Make sure you can reach the Nimble device over its API. Read the Prerequisites section of the official HPE documentation:

Installation
- Install the Centreon Plugin package on every Centreon poller expected to monitor Nimble Flash Arrays: yum install centreon-pack-hardware-storage-nimble-restapi.noarch
- On the Centreon Web interface, install the Nimble Storage Rest API Centreon Plugin-Pack on the "Configuration > Plugin Packs > Manager" page
- Install the Centreon Plugin package on every Centreon poller expected to monitor Nimble Flash Arrays: yum install centreon-plugin-Hardware-Storage-Nimble-Restapi.noarch
- Install the Centreon Plugin-Pack RPM on the Centreon Central server: yum install centreon-pack-hardware-storage-nimble-restapi.noarch
- On the Centreon Web interface, install the Nimble Storage Rest API Centreon Plugin-Pack on the "Configuration > Plugin Packs > Manager" page

Configuration
- Log into Centreon and add a new host through "Configuration > Hosts".
- Apply the template HW-Storage-Nimble-Restapi to the host and configure all the mandatory macros.

FAQ

Why do I get the following error message?

UNKNOWN: 500 Can't connect to myserver.mycompany.com:19999

This error message means that the Centreon Plugin couldn't successfully connect to the Nimble device API. Check that no third-party device (such as a firewall) is blocking the request. A proxy connection may also be necessary to connect to the API. This can be done by using this option in the command: --proxyurl=''.

UNKNOWN: 501 Protocol scheme 'connect' is not supported |

This error can appear when using a proxy to connect to the Nimble API.
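For troubleshooting outside of the Centreon engine, the plugin can also be run by hand from a poller. The plugin path, mode name, and API credential options below are assumptions and will depend on the installed package (use --list-mode to see what your version actually provides); the point is simply that --proxyurl is passed on the command line when a proxy is required:

/usr/lib/centreon/plugins/centreon_plugins.pl \
  --plugin=storage::nimble::restapi::plugin \
  --mode=arrays \
  --hostname=mynimble.mycompany.com \
  --api-username=monitoring --api-password='xxxxx' \
  --proxyurl='http://myproxy.mycompany.com:3128'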
https://docs.centreon.com/20.10/en/integrations/plugin-packs/procedures/hardware-storage-nimble-restapi.html
2021-09-17T04:51:48
CC-MAIN-2021-39
1631780054023.35
[]
docs.centreon.com
When you run a Spark job, you will see a standard set of console messages. In addition, the following information is available: A list of running applications, where you can retrieve the application ID and check the application log: yarn application -list yarn logs -applicationId <app_id> Check the Spark environment for a specific job: http://<host>:8088/proxy/<job_id>/environment/ Specific Issues The following paragraphs describe specific issues and possible solutions: Issue: Job stays in "accepted" state; it doesn't run. This can happen when a job requests more memory or cores than are available. Solution: Assess the workload to see if any resources can be released. You might need to stop unresponsive jobs to make room for the job. Issue: Insufficient HDFS access. This can lead to errors such as the following: “Loading data to table default.testtable Failed with exception Unable to move source hdfs://blue1:8020/tmp/hive-spark/hive_2015-03-04_ 12-45-42_404_3643812080461575333-1/-ext-10000/kv1.txt to destination hdfs://blue1:8020/apps/hive/warehouse/testtable/kv1.txt” Solution: Make sure the user or group running the job has sufficient HDFS privileges to the location. Issue: Wrong host specified in Beeline; an invalid URL error is shown: Error: Invalid URL: jdbc:hive2://localhost:10001 (state=08S01,code=0) Solution: Specify the correct Beeline host assignment. Issue: Error: closed SQLContext. Solution: Restart the Thrift server.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.2.8/bk_spark-quickstart/content/ch_troubleshooting-spark-quickstart.html
2021-09-17T04:31:50
CC-MAIN-2021-39
1631780054023.35
[]
docs.cloudera.com
Date: Tue, 19 Jan 2010 12:04:47 -0600 From: "Doug Poland" <[email protected]> To: "krad" <[email protected]> Cc: [email protected] Subject: Re: Trouble getting a core dump from clamd Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On Tue, January 19, 2010 11:10, krad wrote: > 2010/1/19 Doug Poland <[email protected]> > >> Hello, >> >> I'm running 7.2-RELEASE-p3 i386 and am having an issue getting a >> core dump from a program that is seg faulting. >> > > set a path in the sysctl variable kern.corefile. WIll make the core > file easier to find if one is generated. Generally much tider as well > then having core files littered all over the system > No joy. # sysctl kern.corefile=/var/crash/clamd.core # /usr/local/etc/rc.d/clamav-clamd start Starting clamav_clamd. Segmentation fault # ll /var/crash total 2 -rw-r--r-- 1 root wheel 5 Apr 10 2005 minfree -- Regards, Doug Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=464835+0+archive/2010/freebsd-questions/20100124.freebsd-questions
2021-09-17T05:20:29
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
Metagenomic Read Mapping¶ The Metagenomic Read Mapping Service uses KMA (k-mer alignment) [1] to align reads against antibiotic resistance genes or virulence factors. KMA maps raw reads directly against these databases, and uses k-mer seeding to speed up mapping and the Needleman-Wunsch algorithm to accurately align extensions from k-mer seeds. Software for KMA was downloaded at the following location: I. Locating the Metagenomic Read Mapping Service App¶ At the top of any PATRIC page, find the Services tab. Click on Metagenomic Read Mapping. This will open up the Metagenomic Read Mapping service. Selecting parameters and submitting the Metagenomic Read Mapping job¶ Parameters must be selected prior to job submission. Click on the down arrow at the end of the text box under Predefined Gene Set Name to see the possible selections. The Metagenomic Read Mapping service has two gene sets to choose from. The Comprehensive Antibiotic Resistance Database (CARD) [2] is the current gold standard for antimicrobial resistance genes. It is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR) that emphasizes genes, proteins and mutations that are involved in AMR. The Virulence Factor Database (VFDB) [3] is the current gold standard reference source for virulence factors (VFs), providing up-to-date knowledge of VFs from various bacterial pathogens. Select either CARD or VFDB as the gene set. A folder must be selected for the Metagenomic Read Mapping job. VI. Viewing the Metagenomic Read Mapping job¶ Select the MetagenomicReadMapping.html file. This will populate the vertical green bar with a number of icons. Clicking the information icon (i) will open a new tab that has the Metagenomic Read Mapping tutorial. There are icons for downloading the data, viewing it, deleting the file, renaming the file, copying or sharing with another PATRIC user, moving it to a different directory, or changing the type tagged to the file. To examine the MetagenomicReadMappingReport.html, click on the View icon. This page shows KMA's standard sample report format. The fields of the output, from left-to-right, are as follows: Template: Identifier of the template (reference gene) sequence that matches the query reads Function: Template gene function Genome: Genome that contains the template gene p_value: p-value corresponding to the obtained q_value Clicking on any of the template identifiers in the first column will open a Specialty Gene List View that shows all the genes in PATRIC that have BLAT [4] hits to the same template gene. Clicking on the name in the Genome column will open a new tab that shows the Genome List view, which shows all the genomes in PATRIC that fall under the same taxonomy of the selected name. To see alignment details, click on the kma.aln file and then on the View icon. This will open a text file that shows the alignment between the template and the submitted query sequence. The kma.frag.gz file should be downloaded. It has mapping information on each mapped read, and the columns found in the download are as follows: read, number of equally well mapping templates, mapping score, start position, end position (w.r.t. template), the chosen template. The kma.fsa can be viewed in the workspace. Select the row and click on the View icon. The kma.fsa file shows the consensus sequence drawn from the alignment. The kma.res file can be downloaded, or viewed in the workspace. Click on the row and click on the View icon.
This is a text file that matches the MetagenomicReadMapping.html report. References¶ Clausen, P.T., F.M. Aarestrup, and O. Lund, Rapid and precise alignment of raw reads against redundant databases with KMA. BMC bioinformatics, 2018. 19(1): p. 307. Jia, B., et al., CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database. Nucleic acids research, 2016: p. gkw1004. Chen, L., et al., VFDB 2016: hierarchical and refined dataset for big data analysis—10 years on. Nucleic acids research, 2015. 44(D1): p. D694-D697. Kent, W.J., BLAT—the BLAST-like alignment tool. Genome research, 2002. 12(4): p. 656-664.
https://docs.patricbrc.org/tutorial/metagenomic_read_mapping/metagenomic_read_mapping.html
2021-09-17T03:48:46
CC-MAIN-2021-39
1631780054023.35
[array(['../../_images/Picture110.png', '../../_images/Picture110.png'], dtype=object) array(['../../_images/Picture210.png', '../../_images/Picture210.png'], dtype=object) array(['../../_images/Picture310.png', '../../_images/Picture310.png'], dtype=object) array(['../../_images/Picture44.png', '../../_images/Picture44.png'], dtype=object) array(['../../_images/Picture59.png', '../../_images/Picture59.png'], dtype=object) array(['../../_images/Picture61.png', '../../_images/Picture61.png'], dtype=object) array(['../../_images/Picture71.png', '../../_images/Picture71.png'], dtype=object) array(['../../_images/Picture81.png', '../../_images/Picture81.png'], dtype=object) array(['../../_images/Picture91.png', '../../_images/Picture91.png'], dtype=object) array(['../../_images/Picture101.png', '../../_images/Picture101.png'], dtype=object) array(['../../_images/Picture111.png', '../../_images/Picture111.png'], dtype=object) array(['../../_images/Picture121.png', '../../_images/Picture121.png'], dtype=object) array(['../../_images/Picture131.png', '../../_images/Picture131.png'], dtype=object) array(['../../_images/Picture141.png', '../../_images/Picture141.png'], dtype=object) array(['../../_images/Picture151.png', '../../_images/Picture151.png'], dtype=object) array(['../../_images/Picture161.png', '../../_images/Picture161.png'], dtype=object) array(['../../_images/Picture171.png', '../../_images/Picture171.png'], dtype=object) array(['../../_images/Picture181.png', '../../_images/Picture181.png'], dtype=object) array(['../../_images/Picture191.png', '../../_images/Picture191.png'], dtype=object) array(['../../_images/Picture201.png', '../../_images/Picture201.png'], dtype=object) array(['../../_images/Picture211.png', '../../_images/Picture211.png'], dtype=object) array(['../../_images/Picture221.png', '../../_images/Picture221.png'], dtype=object) array(['../../_images/Picture231.png', '../../_images/Picture231.png'], dtype=object) array(['../../_images/Picture241.png', '../../_images/Picture241.png'], dtype=object) array(['../../_images/Picture251.png', '../../_images/Picture251.png'], dtype=object) array(['../../_images/Picture261.png', '../../_images/Picture261.png'], dtype=object) array(['../../_images/Picture271.png', '../../_images/Picture271.png'], dtype=object) array(['../../_images/Picture281.png', '../../_images/Picture281.png'], dtype=object) array(['../../_images/Picture291.png', '../../_images/Picture291.png'], dtype=object) array(['../../_images/Picture301.png', '../../_images/Picture301.png'], dtype=object) array(['../../_images/Picture311.png', '../../_images/Picture311.png'], dtype=object) array(['../../_images/Picture321.png', '../../_images/Picture321.png'], dtype=object) array(['../../_images/Picture331.png', '../../_images/Picture331.png'], dtype=object) array(['../../_images/Picture341.png', '../../_images/Picture341.png'], dtype=object) array(['../../_images/Picture351.png', '../../_images/Picture351.png'], dtype=object) array(['../../_images/Picture361.png', '../../_images/Picture361.png'], dtype=object) array(['../../_images/Picture371.png', '../../_images/Picture371.png'], dtype=object) array(['../../_images/Picture381.png', '../../_images/Picture381.png'], dtype=object) array(['../../_images/Picture391.png', '../../_images/Picture391.png'], dtype=object) array(['../../_images/Picture401.png', '../../_images/Picture401.png'], dtype=object) array(['../../_images/Picture411.png', '../../_images/Picture411.png'], dtype=object) array(['../../_images/Picture421.png', 
'../../_images/Picture421.png'], dtype=object) array(['../../_images/Picture431.png', '../../_images/Picture431.png'], dtype=object) ]
docs.patricbrc.org
Introducer Configuration¶ The introducer feature lets a device automatically add new devices. When two devices connect they exchange a list of mutually shared folders and the devices connected to those shares. In the following example: Local device L sets remote device R as an introducer. They share the folder “Pictures.” Device R is also sharing the folder with A and B, but L only shares with R. Once L and R connect, L will add A and B automatically, as if R “introduced” A and B to L. Remote device R also shares “Videos” with device C, but not with our local L. Device C will not be added to L as it is not connected to any folders that L and R share. The introduction process involves the autoconfiguration of device IDs, labels and configured address settings, but no other device-specific settings. For each offered device autoconfiguration is only applied once and is done so when a device connects to an introducer; a restart, after configuring a remote device to introduce, will force this. Once autoconfigured, device-specific settings will currently not receive any updates from an introducer. If an introducer adds or removes any devices or shares, or changes device-share settings, however, this change will be reflected to devices upon their next connection. Similarly, if an introduced device is no longer present on an introducer, or no longer shares any mutual folders with the device, it will be automatically removed when devices in the cluster next connect to the introducer. Note that devices which are introduced cannot be removed so long as the introducer device is still marked as such, and if they are unshared from a folder they will be re-added. Introducer status is transferable; that is, an introducers’ introducer will become your introducer as well. It is not a good idea to set two devices as introducers to each other. While this will work for adding devices, removing a device may present a problem, as the two devices will be constantly “re-introducing” the removed device to each other.
https://docs.syncthing.net/branch/untrusted/html/users/introducer.html
2021-09-17T04:24:02
CC-MAIN-2021-39
1631780054023.35
[]
docs.syncthing.net
SLA definition Availability UKCloud will use reasonable endeavours to ensure that the availability of the UKCloud service purchased by the customer (the service) in a given calendar month equals the applicable availability commitment. To define availability, UKCloud monitors a number of service elements — some generic, some service specific — which collectively enable the customer to use or access the service. If the availability of the service is less than the associated Availability Commitment, the customer may request Service Credits for the service within 30 calendar days of the service being deemed unavailable. Unavailability and service level agreement events Subject to the service level agreement (SLA) limitations detailed below, the service will be considered unavailable (and an SLA event will be deemed as having taken place) if UKCloud's monitoring detects that the service or component has failed for five consecutive minutes. The total number of minutes that the service is unavailable is measured from the time that UKCloud confirms the SLA event has occurred until the time that UKCloud resolves the issue and the service becomes available to the customer. If two or more SLA events occur simultaneously, the SLA event with the longest duration will be used to determine the total number of minutes for which the service was unavailable. Service Credits If the availability of the service for a particular month falls below the availability commitment specified in the applicable SLA (subject to the SLA limitations provided below), customers will be eligible to request Service Credits. Service Credits will be calculated as a percentage of the fees billed for the monthly period during which the SLA event occurred (to be applied at the end of the billing cycle, or of the subsequent cycle if a claim is made after an invoice has been paid). Note You will not be eligible to receive a Service Credit if your account has any undisputed payments outstanding beyond their due date or you are in violation of UKCloud's Terms and Conditions including the UKCloud System Interconnect Security Policy (SISP). Service level agreement limitations The following will be excluded from any time-based calculations related to the service being unavailable: Service dependent — Planned Maintenance windows (as specified in the applicable Service Definition) Emergency Maintenance (as specified in the applicable Service Definition) Your misuse of a particular service (as defined in the UKCloud Terms and Conditions and the UKCloud System Interconnect Security Policy (SISP)) A force majeure event Denial of service attacks, virus or hacking attacks for which there is no commercially reasonable known solution; or any other events that are not within the direct control of UKCloud or that could not have been avoided with commercially reasonable care Packet loss, network or connectivity (for example, internet, MPLS) problems beyond UKCloud's management boundary (for example, border router) supporting our connectivity to the public internet or government secure networks Any customer-defined or customer-controlled event (for example, unavailability of service resulting from inadequate customer-subscribed services, resources or configuration) The customer will not be eligible to receive a Service Credit if the service account has any undisputed payments outstanding beyond their due date, or you are in violation of UKCloud's Terms and Conditions including the UKCloud System Interconnect Security Policy (SISP). 
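To make the availability arithmetic above concrete, here is a purely illustrative calculation. This is not UKCloud's official formula; the actual availability commitments and Service Credit percentages are those stated in each service's SLA.

def monthly_availability(unavailable_minutes, days_in_month=30):
    # Availability is the fraction of the month the service was usable,
    # expressed as a percentage.
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - unavailable_minutes) / total_minutes

# e.g. a single SLA event lasting 90 minutes in a 30-day month:
print(round(monthly_availability(90), 3))  # 99.792

If the resulting figure falls below the applicable availability commitment, the customer may be eligible to request Service Credits as described below.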
Service level agreement claims To request a Service Credit, the customer must file a support ticket within thirty (30) calendar days of the relevant suspected SLA event. UKCloud will review the request and issue a Service Credit if applicable. Service Credits will be issued only to the customer that UKCloud invoices for the applicable instance of the service, as a separate credit note that can be applied towards a future invoice for that service only. If the customer's contract term for the service expires or is terminated prior to a Service Credit being issued, the Service Credit will become void as of the date of the expiration or termination. UKCloud service level agreement monitoring Generic service components UKCloud monitors various elements across the service to ensure that availability can be measured appropriately and realistically. Some elements are generic across all services; others are service specific. UKCloud Portal UKCloud monitors the customer self-service UKCloud Portal (or Elevated equivalent). This includes the UKCloud for VMware API, which provides programmatic control of the UKCloud for VMware service. UKCloud for Microsoft Azure UKCloud for OpenStack UKCloud for Oracle Software Important This service is no longer available for sale. The following information is provided to support existing customers of the service only. UKCloud for Red Hat OpenShift UKCloud for VMware Cloud Storage Cross Domain Security Zone Dedicated Compute v2 Disaster Recovery as a Service (DRaaS) Due to the service being dependent on connectivity between the customer data centre and UKCloud, we are unable to offer an SLA relating to the performance of this service. For full details of the SLA for the UKCloud for VMware platform that hosts the solution, see UKCloud for VMware. Important This service is no longer available for sale. The following information is provided to support existing customers of the service only. Extended Network Support from UKCloud SLA varies based on the chosen cloud technology. See the appropriate section of this article for more information. Managed Monitoring as a Service Service levels for Managed Monitoring as a Service are split. Migration to the Cloud Due to the service being dependent on connectivity between the customer data centre and UKCloud, we are unable to offer an SLA relating to the performance of this service. Neustar DDoS Protection from UKCloud Neustar UltraDNS from UKCloud Private Cloud Private Cloud for Oracle Software Private Cloud for Storage Secure Remote Access Security Operations Service SLA varies based on the chosen cloud technology. See the appropriate section of this article for more information. VMware Licence Support There is no SLA for this service. For information about customer service targets for support response, see How to raise and escalate support tickets with customer support.
https://docs.ukcloud.com/articles/other/other-ref-sla-definition.html
2021-09-17T04:34:26
CC-MAIN-2021-39
1631780054023.35
[]
docs.ukcloud.com
Affiliated Packages¶ As a community project, our goal is to provide a base package that gives users something useful out of the box, but enables them to improve upon what we've delivered as needed. As such, we hope that the community will take their creations and feed them back into the TOM ecosystem. We've set up this page to keep track of any community-developed tools and interesting ideas; if you happen to have a project with any custom code that the community would find useful, please let us know and we can add it here! tom_iag¶ The tom_iag plugin provides an interface to the MONET IAG telescopes, and was developed by Tim-Oliver Husser. tom_gemini_community¶ The tom_gemini_community plugin provides additional features for making observation requests to Gemini, and includes support for guide star selection via tom_gemini_community. The plugin was developed by Bryan Miller. SNEx Custom Code¶ The updated SNEx codebase contains the following custom elements:
https://tom-toolkit.readthedocs.io/en/stable/api/affiliated.html
2021-09-17T03:43:45
CC-MAIN-2021-39
1631780054023.35
[]
tom-toolkit.readthedocs.io
Receiving Data in Transit¶ Data in transit, also known as data in flight, refers to data that is in the process of being moved and therefore not permanently stored in a location where it can be in a static state. Streaming data, messages in a queue or a topic in a messaging system, and requests sent to a HTTP listening port are a few examples. The sources from which data in transit/flight are received can be classified into two categories as follows: Data publishers: You can receive data from these sources without subscribing to receive it. (e.g., HTTP, HTTPS, TCP, email, etc.) Messaging Systems: You need to subscribe to receive data from the source. (e.g., messaging systems such as Kafka, JMS, MQTT, etc.) Receiving data from data publishers¶ Data publishers are transports from which WSO2 SI can receive messages without subscribing for them. In a typical scenario, you are required to open a port in the WSO2 Streaming Integrator that is dedicated to listening for messages from the data publisher. To receive data from a data publisher, define an input stream and connect a [source] annotation of a type that receives data from a data publisher, as shown in the example below. @source(type='http', receiver.url='http://localhost:5005/StudentRegistrationEP', @map(type = 'json')) define stream StudentRegistrationStream (name string, course string); In this example, an online student registration results in an HTTP request in JSON format being sent to the endpoint named StudentRegistrationEP at the 5005 port of the localhost. The source generates an event in the StudentRegistrationStream stream for each of these requests. Try it out¶ To try out the example given above, let's include the source configuration in a Siddhi application and simulate an event to it. Open and access Streaming Integrator Tooling. For instructions, see Streaming Integrator Tooling Overview - Starting Streaming Integrator Tooling. Open a new file and add the following Siddhi application to it. @App:name('StudentRegistrationApp') @source(type = 'http', receiver.url = "http://localhost:5005/StudentRegistrationEP", @map(type = 'json')) define stream StudentRegistrationStream (name string, course string); @sink(type = 'log', prefix = "New Student", @map(type = 'passThrough')) define stream StudentLogStream (name string, course string, total long); @info(name = 'TotalStudentsQuery') from StudentRegistrationStream select name, course, count() as total insert into StudentLogStream; Save the Siddhi application. This Siddhi application contains the http source of the previously used example. The TotalStudentsQuery query selects all the student registrations captured as HTTP requests and directs them to the StudentLogStream output stream. A log sink connected to this output stream logs these registrations in the terminal. Before logging the events, the Siddhi application also counts the number of registrations via the count() function. This count is presented as total in the logs. Start the Siddhi application by clicking on the play icon in the top panel. To simulate an event, issue the following two cURL commands. curl -X POST \ http://localhost:5005/StudentRegistrationEP \ -H 'content-type: application/json' \ -d '{ "event": { "name": "John Doe", "course": "Graphic Design" } }' curl -X POST \ http://localhost:5005/StudentRegistrationEP \ -H 'content-type: application/json' \ -d '{ "event": { "name": "Michelle Cole", "course": "Graphic Design" } }' The following is logged in the terminal.
```text INFO {io.siddhi.core.stream.output.sink.LogSink} - New Student : Event{timestamp=1603185021250, data=[John Doe, Graphic Design, 1], isExpired=false} INFO {io.siddhi.core.stream.output.sink.LogSink} - New Student : Event{timestamp=1603185486763, data=[Michelle Cole, Graphic Design, 2], isExpired=false} ``` Supported transports¶ The following are the supported transports to capture data in transit from data publishers. Supported mappers¶ Mappers determine the format in which the event is received. For information about transforming events by changing the format in which the data is received/published, see Transforming Data. The following are the supported mappers when you receive data from data publishers. Receiving data from messaging systems¶ This section explains how to receive input data from messaging systems where WSO2 Streaming Integrator needs to subscribe to specific queues/topics in order to receive the required data. To receive data from a messaging system, define an input stream and connect a [source] annotation of a type that receives data from a messaging system. For example, consider a weather broadcasting application that publishes the temperature and humidity for each region it monitors in a separate Kafka topic. The local weather broadcasting firm of Houston wants to subscribe to receive weather broadcasts for Houston. @source(type='kafka', topic.list='houston', threading.option='single.thread', group.id="group1", bootstrap.servers='localhost:9092', @map(type='json')) define stream TemperatureHumidityStream (temperature int, humidity int); The above Kafka source listens at bootstrap server localhost:9092 for messages in the Kafka topic named houston sent in JSON format. For each message, it generates an event in the TemperatureHumidityStream stream. Try it out¶ To try the above example, first create a Kafka topic named houston by issuing the following command from the <KAFKA_HOME> directory. bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic houston Prepare WSO2 Streaming Integrator Tooling to consume Kafka messages as follows: Start and access WSO2 Streaming Integrator Tooling. Download and install the Kafka extension to it. For instructions, see Installing Siddhi Extensions. Open a new file and add the following Siddhi application to it. @App:name('TemperatureReportingApp') @source(type = 'kafka', topic.list = "houston", threading.option = "single.thread", group.id = "group1", bootstrap.servers = "localhost:9092", @map(type = 'json')) define stream TemperatureHumidityStream (temperature int, humidity int); @sink(type = 'log', prefix = "Temperature Update", @map(type = 'passThrough')) define stream OutputStream (temperature int, humidity int); @info(name = 'query1') from TemperatureHumidityStream select * insert into OutputStream; This Siddhi application includes the Kafka source that subscribes to the houston Kafka topic and generates an event in the TemperatureHumidityStream stream for each message in the topic (as described in the example in the previous section). The query1 query gets all these messages from the TemperatureHumidityStream stream and inserts them into the OutputStream stream so that they can be logged via the log sink connected to the latter. Save the Siddhi application. Start the TemperatureReportingApp Siddhi application that you created and saved. To generate a message in the houston Kafka topic, follow the steps below: To run the Kafka command line client, issue the following command from the <KAFKA_HOME> directory.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic houston You are then prompted to type messages in the console. Type the following in the command prompt. {"event":{ "temperature":23, "humidity":99}} This pushes a message to the Kafka server. Then, the Siddhi application you deployed in the Streaming Integrator consumes this message and, as a result, the Streaming Integrator log displays the event. Check the logs of Streaming Integrator Tooling. The Kafka message you generated is logged as follows: INFO {io.siddhi.core.stream.output.sink.LogSink} - Temperature Update : Event{timestamp=1603339705244, data=[23, 99], isExpired=false} Supported transports¶ The following are the supported transports to capture data in transit from messaging systems. Supported mappers¶ Mappers determine the format in which the event is received. For information about transforming events by changing the format in which the data is received/published, see Transforming Data. The following are the supported mappers when you receive data from messaging systems.
https://apim.docs.wso2.com/en/latest/use-cases/streaming-usecase/receiving-data-in-transit/
2021-09-17T05:16:04
CC-MAIN-2021-39
1631780054023.35
[array(['https://apim.docs.wso2.com/en/4.0.0/assets/img/streaming/receiving-data-in-transit/push-data-sources.png', 'receiving data from a data publisher'], dtype=object) array(['https://apim.docs.wso2.com/en/4.0.0/assets/img/streaming/receiving-data-in-transit/pull-data-sources.png', 'receiving data from a messaging system'], dtype=object) ]
apim.docs.wso2.com
Date: Mon, 09 Nov 2015 13:47:31 -0800 From: David Christensen <[email protected]> To: [email protected] Subject: Re: FreeBSD-10.2-RELEASE-amd64 encrypted ZFS root and swap Message-ID: <[email protected]> In-Reply-To: <CAESeg0z16yMRn15_5BX7DtyofaXB5CgC4f6Djtu9dkHaO212Ag@mail.gmail.com> References: <[email protected]> <CAESeg0z16yMRn15_5BX7DtyofaXB5CgC4f6Djtu9dkHaO212Ag@mail.gmail.com> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On 11/09/2015 12:32 PM, Scott Ballantyne wrote: > I have seen this many times when external USB or memory devices are > connected. It seems to confuse the ZFS boot loader. > > Perhaps that has happened in your case? Good idea -- I've also seen that. Verify: VGA cable PS/2 keyboard cable PS/2 mouse cable LAN cable speaker cable optical drive is empty floppy drive is empty no other hard drives installed reset the CMOS settings to defaults boot -- black screen with blinking cursor. Try again: Remove IDE HBA's boot -- black screen with blinking cursor. Any other ideas? David Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=66736+0+archive/2015/freebsd-questions/20151115.freebsd-questions
2021-09-17T04:41:06
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
Advanced settings: performance monitor collection Timeout . Time to wait before timing out the polling request. Retry . Number of times WhatsUp Gold attempts to send the command before the device is considered down. Determine Uniqueness by . Relevant to the Disk, Memory, and Interface performance monitors. Disk Index . Select to determine uniqueness by the disk, memory, or interface index. Disk Description . Select to determine the uniqueness by the disk, memory, or interface description. This prevents interruptions in data gathering if a re-index occurs. Poll Interface Traffic Counters . Used to determine if interface uses regular (RFC1213) or high capacity (RFC2233) counters.
https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/39093.htm
2021-09-17T03:17:42
CC-MAIN-2021-39
1631780054023.35
[]
docs.ipswitch.com
Web Interface Engine The rendering of web interfaces in end-user applications is based on the Web Interface Engine. Depending on the engine version that is assigned to a web interface, the rendering can be different. RunMyProcess provides periodic updates for the engine. While it is possible to continue to use a previous version, you may wish to upgrade to a newer version at some point in time - this may be to take advantage of new features or for performance improvements. Before assigning a new engine version, you have to carefully check all changes between the old and the new engine version since they may have an effect on the web interface rendering. You need to ensure that the web interface is fully compatible with the engine you assign and that there are no unwanted side effects. Changes might be required to make the web interface compliant with the changes introduced by the upgrade. Upgrading the engine version for a web interface that is used in production must therefore be handled with utmost attention. A new web interface has automatically the latest engine version assigned. If you copy a web interface, the copy also uses the latest version which could be different from the one used by the original. - Web Interface Engine - Viewing and Upgrading the Web Interface Engine Version - Overview of Release Changes - 2018-11-07 - 5.19.2 - Custom Widgets - 2018-02-05 - 5.17.1 - JSON Arrays in Variables - 2017-10-12 - 5.16.0 - Validation Rules in Custom Widgets - 2017-06-29 - 5.15.0 - Filter in Reports - 2017-02-01 - Thibault v2 - Date Widget - 2016-11-23 - Thibault v1 - Disable Property Changed for Text Input - 2016-09-28 - Mongo v3 - Label for Checkbox Lists - 2016-02-03 - Wight v1 - JavaScript Report and Variable Visibility - 2015-11-12 - Java v2 - Array - 2015-06-03 - Lifou v3 - Post-Loaded Script of Reports - 2015-04-15 - Lifou v2 - Escape Character in Scripts - 2013-06-13 - Mogador v1 - Introduction of JQuery and CSS Change Viewing and Upgrading the Web Interface Engine Version To view the Web Interface Engine version that is used by a web interface, open the web interface in the WebModeler of DigitalSuite Studio, open the Settings pane, go to Settings > Advanced > Web Interface Engine. The Version field displays the current version. If an older version is displayed for the web interface, you can upgrade to the latest engine version. Before doing the upgrade, check the changes introduced with Web Interface Engine releases in the section below. Overview of Release Changes With the releases described below, changes were introduced that have an impact when upgrading the engine assigned to a web interface. For more detailed information, please contact the RunMyProcess support. 2018-11-07 - 5.19.2 - Custom Widgets Engine versions as of 5.19.2 display the correct value for input widgets in web interface instances for widgets that are nested in two or more custom widgets. This affects all types of widget for which the Initialize option is set in order to initalize the widget with a specified value. 2018-02-05 - 5.17.1 - JSON Arrays in Variables Engine versions as of 5.17.1 now validate variables with patterns using square brackets ( [...]) before submitting a form. If the widget holding the variable is wrapped by an array widget, only the value that corresponds to this widget index is validated. Otherwise, the whole JSON value is validated against the configured pattern. 
2017-10-12 - 5.16.0 - Validation Rules in Custom Widgets This release fixes an issue that prevented validation rules from working when defined within a custom widget. 2017-06-29 - 5.15.0 - Filter in Reports This release fixes an issue that prevented filter settings in report definitions from being effective. The MP_Report.add(Measure)Filter JavaScript function is now available if a user filter is set. 2017-02-01 - Thibault v2 - Date Widget This release fixes an issue which prevented the web interface designer from manually changing the hour in date widgets. 2016-11-23 - Thibault v1 - Disable Property Changed for Text Input Due to a W3C specification update, this release introduces a change in widget properties. Non-active widgets of type Text Input, Date Input, URL Input, Captcha, and Upload no longer allowed selection, copy/paste, and error display in specific web browsers. In web interfaces using the Thibault v1 or a more recent engine, widgets of these types are now set to read-only, i.e. the Editable property is not set 2016-09-28 - Mongo v3 - Label for Checkbox Lists This release fixes an issue that prevented the label in checkbox lists to be displayed above the checkboxes. Selecting Above as Label position in the settings of all checkbox list widgets does now have the correct effect. 2016-02-03 - Wight v1 - JavaScript Report and Variable Visibility This release reduces the scope of variables and functions defined in the Loading data script which is used in JavaScript reports. Variables and functions defined within the script are therefore no longer accessible from outside of the script. When upgrading from an older engine version, you have to make sure that there are no references to these variables and functions from outside the script. 2015-11-12 - Java v2 - Array Engine versions as of Java v2 support the following syntax in dynamic rules: [[myarray.column1]][P_index] Web interfaces using the Java v2 or a more recent engine need to be tested to ensure that the syntax change does not result in any problem. 2015-06-03 - Lifou v3 - Post-Loaded Script of Reports Web interfaces using the Lifou v3 or a more recent engine execute the post-loaded script of reports each time data is loaded into the report (e.g. when computing a filter, navigating from page to page, or refreshing the data via JavaScript or manually). Web interfaces using the Lifou v2 or an older engine, however, execute the post-loaded script only when loading the data for the first time. 2015-04-15 - Lifou v2 - Escape Character in Scripts Web interfaces using the Lifou v2 or a more recent engine use an escape rule in embedded scripts that is different from the rule used in older engines. A simple backslash ( \) is used to escape special characters instead of a double backslash ( \\). This applies to all scripted properties of widgets, for example scripts, URLs, or regular expressions. The following example shows the pattern in an email widget. The email widget is simply text input that uses a regular expression to check if the information that is filled in is indeed an email. Pattern before the Lifou v2 release: Pattern with the Lifou v2 release: 2013-06-13 - Mogador v1 - Introduction of JQuery and CSS Change Web interfaces using the Mogador v1 or a more recent engine require JQuery to work correctly. This means that when creating a new web interface from an existing one, for example by copying it, you have to add the JQuery framework if the web interface source was created with an older engine. 
To add the JQuery library, open the web interface in the WebModeler of DigitalSuite Studio, open the Settings pane, and go to CSS & JS > JS. In the JS section, add the standard jQuery library available on the DigitalSuite platform. This will then be used in the header and footer of your web interface. Please be aware that the look and feel of a web interface will be significantly different when using the Mogador v1 or a more recent engine since the CSS was completely rewritten for the Mogador v1 release.
https://docs.runmyprocess.com/Operator_Guide_DigitalSuite/Web_Interface_Engine/
2021-09-17T04:26:01
CC-MAIN-2021-39
1631780054023.35
[]
docs.runmyprocess.com
Aggregation engine The aggregation engine combines streams of data into a single value using the selected aggregation function. Built-in aggregation functions include options like count, sum, mean, median, percentile, first/last values, min/max, etc. You can also define your own custom metrics using our visual UI and use them as the aggregation function. The engine takes time range, aggregation function, compare groups, and filter conditions as inputs. It figures out which columns to scan based on the stored and virtual columns used in the query. The aggregation engine is also an important part of other more complex queries. It's usually the engine that runs last and aggregates values from the physical and virtual columns generated earlier to deliver the requested results.
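As a purely conceptual sketch (this is not Scuba's query language or API; the column names, rows, and helper function below are invented for illustration), the engine's job can be pictured as: filter rows by time range and conditions, split them into compare groups, then apply the aggregation function to each group.

import statistics

rows = [
    {"ts": 100, "country": "US", "latency": 12},
    {"ts": 105, "country": "US", "latency": 30},
    {"ts": 110, "country": "DE", "latency": 25},
]

def aggregate(rows, time_range, agg_fn, group_key, predicate, value_key):
    # 1. keep only rows inside the time range that satisfy the filter conditions
    selected = [r for r in rows
                if time_range[0] <= r["ts"] <= time_range[1] and predicate(r)]
    # 2. split the surviving rows into compare groups
    groups = {}
    for r in selected:
        groups.setdefault(r[group_key], []).append(r[value_key])
    # 3. apply the aggregation function to each group
    return {k: agg_fn(v) for k, v in groups.items()}

print(aggregate(rows, (100, 120), statistics.mean, "country",
                lambda r: r["latency"] > 10, "latency"))
# prints something like {'US': 21, 'DE': 25}

In this picture, a custom metric defined in the UI plays the role of agg_fn, and virtual columns are simply computed fields added to each row before step 1.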
https://docs.scuba.io/glossary/Aggregation-engine.1302233144.html
2021-09-17T05:02:59
CC-MAIN-2021-39
1631780054023.35
[]
docs.scuba.io
Resetting a Transformation If you made too many transformations and it is not going where you want it to go, or simply because you want to reset a pose you are reusing from another animation, you can reset your character's position. Once the layers are positioned in the Camera view, you can easily return them to their original position. There are three different ways to reset a transformation. Using the Reset command, you can reset the value of the selected element to the initial value of the active tool. For example, if the Rotate tool is active, the transformation angle will be reset to 0 and if the Transform tool is active, all the transformation values will be reset. The Reset All option resets all transformations on the current frame in a selected layer. Your keyframe will remain, but all the values will return to the default position. All transformation are reset regardless of the tool you are using. The Reset All Except Z option resets all the transformations on the current frame except the Z position. This is useful when doing cut-out animation. Cut-out puppets often have a particular Z ordering for the different views of a character. You might want to reset the transformation, but not necessarily the Z position. In the Timeline view, you can also use the Clear All Values command to reset all transformation values on the selected layers. Right-click on the selected layers and select Layers > Clear All Values. The selected layer(s) return to their original position. The selected layer(s) return to their original position. The selected layer(s) return to their original position, except for the Z values.
https://docs.toonboom.com/help/harmony-11/workflow-network/Content/_CORE/_Workflow/022_Cut-out_Animation/064_H1_Resetting_a_Transformation.html
2021-09-17T02:58:55
CC-MAIN-2021-39
1631780054023.35
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Tricks¶ This section of the book discusses a couple of tricks that can be exploited to either speed up computations, or save on RAM. However, there is probably no silver bullet, and you have to evaluate your code in terms of execution speed (if the execution is time critical), or RAM used. You should also keep in mind that, if a particular code snippet is optimised on some hardware, there is no guarantee that on another piece of hardware, you will get similar improvements. Hardware implementations are vastly different. Some microcontrollers do not even have an FPU, so you should not be surprised that you get significantly different benchmarks. Just to underline this statement, you can study the collection of benchmarks. Use an ndarray, if you can¶ Many functions in ulab are implemented in a universal fashion, meaning that both generic micropython iterables, and ndarrays can be passed as an argument. E.g., both from ulab import numpy as np np.sum([1, 2, 3, 4, 5]) and from ulab import numpy as np a = np.array([1, 2, 3, 4, 5]) np.sum(a) will return the micropython variable 15 as the result. Still, np.sum(a) is evaluated significantly faster, because in np.sum([1, 2, 3, 4, 5]), the interpreter has to fetch 5 micropython variables, convert them to float, and sum the values, while the C type of a is known, thus the interpreter can invoke a single for loop for the evaluation of the sum. In the for loop, there are no function calls, the iteration simply walks through the pointer holding the values of a, and adds the values to an accumulator. If the array a is already available, then you can gain a factor of 3 in speed by calling sum on the array, instead of using the list. Compared to the python implementation of the same functionality, the speed-up is around 40 (again, this might depend on the hardware). On the other hand, if the array is not available, then there is not much point in converting the list to an ndarray and passing that to the function. In fact, you should expect a slow-down: the constructor has to iterate over the list elements, and has to convert them to a numerical type. On top of that, it also has to reserve RAM for the ndarray. Use a reasonable dtype¶ Just as in numpy, the default dtype is float. But this does not mean that that is the most suitable one in all scenarios. If data are streamed from an 8-bit ADC, and you only want to know the maximum, or the sum, then it is quite reasonable to use uint8 for the dtype. Storing the same data in float array would cost 4 or 8 times as much RAM, with absolutely no gain. Do not rely on the default value of the constructor’s keyword argument, and choose one that fits! Beware the axis!¶ Whenever ulab iterates over multi-dimensional arrays, the outermost loop is the first axis, then the second axis, and so on. E.g., when the sum of a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], dtype=uint8) is being calculated, first the data pointer walks along [1, 2, 3, 4] (innermost loop, last axis), then is moved back to the position, where 5 is stored (this is the nesting loop), and traverses [5, 6, 7, 8], and so on. Moving the pointer back to 5 is more expensive, than moving it along an axis, because the position of 5 has to be calculated, whereas moving from 5 to 6 is simply an addition to the address. 
Thus, while the matrix b = array([[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]], dtype=uint8) holds the same data as a, the summation over the entries in b is slower, because the pointer has to be re-wound three times, as opposed to twice in a. For small matrices the savings are not significant, but you would definitely notice the difference, if you had a = array(range(2000)).reshape((2, 1000)) b = array(range(2000)).reshape((1000, 2)) The moral is that, in order to improve on the execution speed, whenever possible, you should try to make the last axis the longest. As a side note, numpy can re-arrange its loops, and puts the longest axis in the innermost loop. This is why the longest axis is sometimes referred to as the fast axis. In ulab, the order of the axes is fixed. Reduce the number of artifacts¶ Before showing a real-life example, let us suppose that we want to interpolate uniformly sampled data, and the absolute magnitude is not really important, we only care about the ratios between neighbouring value. One way of achieving this is calling the interp functions. However, we could just as well work with slices. # code to be run in CPython a = array([0, 10, 2, 20, 4], dtype=np.uint8) b = np.zeros(9, dtype=np.uint8) b[::2] = 2 * a b[1::2] = a[:-1] + a[1:] b //= 2 b array([ 0, 5, 10, 6, 2, 11, 20, 12, 4], dtype=uint8) b now has values from a at every even position, and interpolates the values on every odd position. If only the relative magnitudes are important, then we can even save the division by 2, and we end up with # code to be run in CPython a = array([0, 10, 2, 20, 4], dtype=np.uint8) b = np.zeros(9, dtype=np.uint8) b[::2] = 2 * a b[1::2] = a[:-1] + a[1:] b array([ 0, 10, 20, 12, 4, 22, 40, 24, 8], dtype=uint8) Importantly, we managed to keep the results in the smaller dtype, uint8. Now, while the two assignments above are terse and pythonic, the code is not the most efficient: the right hand sides are compound statements, generating intermediate results. To store them, RAM has to be allocated. This takes time, and leads to memory fragmentation. Better is to write out the assignments in 4 instructions: # code to be run in CPython b = np.zeros(9, dtype=np.uint8) b[::2] = a b[::2] += a b[1::2] = a[:-1] b[1::2] += a[1:] b array([ 0, 10, 20, 12, 4, 22, 40, 24, 8], dtype=uint8) The results are the same, but no extra RAM is allocated, except for the views a[:-1], and a[1:], but those had to be created even in the origin implementation. Upscaling images¶ And now the example: there are low-resolution thermal cameras out there. Low resolution might mean 8 by 8 pixels. Such a small number of pixels is just not reasonable to plot, no matter how small the display is. If you want to make the camera image a bit more pleasing, you can upscale (stretch) it in both dimensions. This can be done exactly as we up-scaled the linear array: # code to be run in CPython b = np.zeros((15, 15), dtype=np.uint8) b[1::2,::2] = a[:-1,:] b[1::2,::2] += a[1:, :] b[1::2,::2] //= 2 b[::,1::2] = a[::,:-1:2] b[::,1::2] += a[::,2::2] b[::,1::2] //= 2 Up-scaling by larger numbers can be done in a similar fashion, you simply have more assignments. There are cases, when one cannot do away with the intermediate results. Two prominent cases are the where function, and indexing by means of a Boolean array. 
E.g., in # code to be run in CPython a = array([1, 2, 3, 4, 5]) b = a[a < 4] b array([1, 2, 3]) the expression a < 4 produces the Boolean array, # code to be run in CPython a < 4 array([ True, True, True, False, False]) If you repeatedly have such conditions in a loop, you might have to periodically call the garbage collector to remove the Boolean arrays that are used only once.
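A minimal sketch of such a loop (illustrative only; the array contents, threshold and iteration count are arbitrary, the point is the periodic gc.collect() call):

import gc
from ulab import numpy as np

a = np.array(range(1000))
for _ in range(100):
    b = a[a < 500]     # creates a single-use Boolean array plus a new ndarray
    # ... work with b here ...
    gc.collect()       # reclaim the temporaries before the next iteration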
https://micropython-ulab.readthedocs.io/en/stable/ulab-tricks.html
2021-09-17T02:55:20
CC-MAIN-2021-39
1631780054023.35
[]
micropython-ulab.readthedocs.io
An IMAP stream returned by imap_open(). Returns the number of recent messages in the current mailbox, as an integer. If you would like to skip the update of recent messages, use the OP_READONLY flag when you open the mailbox to check for new messages. If you want to know the number of unread messages, just use imap_mailboxmsginfo().
http://docs.php.net/manual/it/function.imap-num-recent.php
2016-09-25T00:23:07
CC-MAIN-2016-40
1474738659680.65
[]
docs.php.net
As mentioned already Marble always displays a dynamic scale bar on the lower left to estimate distances on the map. Together with the windrose in the top right corner these overlays are provided for better orientation. But there's more: Marble allows you to measure distances between two or more points on earth. To do so click the respective points in correct order on the globe using the mouse button. On each click a popup menu will appear which allows you to add a measure point (Add Measure Point) or to remove all measure points altogether (Remove Measure Points): Once you have added at least two measure points, the total distance will be displayed in the top left corner of the map. Marble will assume a spherical earth for all measurements which should be accurate enough for most cases. Tip Displaying of distances and bearings for the measured segments can be configured using Measure Tool configuration dialog.
https://docs.kde.org/trunk5/en/kdeedu/marble/measure-distances.html
2016-09-25T00:17:14
CC-MAIN-2016-40
1474738659680.65
[array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object) array(['measure-1.png', 'Marble measuring distances'], dtype=object)]
docs.kde.org
Buffers and Memoryview Objects¶ Under Python 2, an array can only expose its contents via the old-style buffer interface. This limitation does not apply to Python 3, where memoryview objects can be constructed from arrays, too. The new-style Py_buffer struct¶ Py_buffer¶ - Py_ssize_t len The total length of the memory in bytes. - const char * format A NULL terminated string in struct module style syntax giving the contents of the elements available through the buffer. If this is NULL, "B" (unsigned bytes) is assumed. - int ndim¶ The number of dimensions the memory represents as a multi-dimensional array. If it is 0, strides and suboffsets must be NULL. - Py_ssize_t * shape¶ An array of Py_ssize_ts the length of ndim giving the shape of the memory as a multi-dimensional array. Note that ((*shape)[0] * ... * (*shape)[ndims-1]) * itemsize should be equal to len. - Py_ssize_t * strides¶ An array of Py_ssize_ts the length of ndim giving the number of bytes to skip to get to a new element in each dimension. - Py_ssize_t * suboffsets¶ If no de-referencing is needed, this field must be NULL (the default value). - Py_ssize_t itemsize¶. - void * internal¶. MemoryView objects¶ New in version 2.7. A memoryview object exposes the new C level buffer interface as a Python object which can then be passed around like any other object. - PyObject * PyMemoryView_FromObject(PyObject *obj)¶ Create a memoryview object from an object that defines the new buffer interface. - PyObject * PyMemoryView_FromBuffer(Py_buffer *view)¶ Create a memoryview object wrapping the given buffer-info structure view. The memoryview object then owns the buffer, which means you shouldn't try to release it yourself: it will be released on deallocation of the memoryview. - int PyMemoryView_Check(PyObject *obj)¶ Return true if the object obj is a memoryview object. It is not currently allowed to create subclasses of memoryview. Old-style buffer objects¶ - PyTypeObject PyBuffer_Type¶ The instance of PyTypeObject which represents the Python buffer type; it is the same object as buffer and types.BufferType in the Python layer. - int Py_END_OF_BUFFER¶ This constant may be passed as the size parameter to PyBuffer_FromObject() or PyBuffer_FromReadWriteObject(). It indicates that the new PyBufferObject should refer to the base object from the specified offset to the end of its exported buffer. Using this enables the caller to avoid querying the base object for its length. - int PyBuffer_Check(PyObject *p)¶ Return true if the argument has type PyBuffer_Type. - PyObject* PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size)¶ - Return value: New reference. Return a new read-only buffer object. This raises TypeError if base doesn't support the read-only buffer protocol or doesn't provide exactly one buffer segment, or it raises ValueError if offset is less than zero. Changed in version 2.5: This function used an int type for offset and size. This might require changes in your code for properly supporting 64-bit systems. - PyObject* PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size)¶ - Return value: New reference. Return a new writable buffer object. Parameters and exceptions are similar to those for PyBuffer_FromObject(). If the base object does not export the writeable buffer protocol, then TypeError is raised. Changed in version 2.5: This function used an int type for offset and size. This might require changes in your code for properly supporting 64-bit systems. - PyObject* PyBuffer_FromMemory(void *ptr, Py_ssize_t size)¶ - Return value: New reference.
Return a new read-only buffer object that reads from a specified location in memory, with a specified size. The caller is responsible for ensuring that the memory buffer, passed in as ptr, is not deallocated while the returned buffer object exists. Raises ValueError if size is less than zero. Note that Py_END_OF_BUFFER may not be passed for the size parameter; ValueError will be raised in that case. Changed in version 2.5: This function used an int type for size. This might require changes in your code for properly supporting 64-bit systems. - PyObject* PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size)¶ - Return value: New reference. Similar to PyBuffer_FromMemory(), but the returned buffer is writable. Changed in version 2.5: This function used an int type for size. This might require changes in your code for properly supporting 64-bit systems. - PyObject* PyBuffer_New(Py_ssize_t size)¶ - Return value: New reference. Return a new writable buffer object that maintains its own memory buffer of size bytes. ValueError is returned if size is not zero or positive. Note that the memory buffer (as returned by PyObject_AsWriteBuffer()) is not specifically aligned. Changed in version 2.5: This function used an int type for size. This might require changes in your code for properly supporting 64-bit systems.
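At the Python level these objects can be exercised directly; the following is a small illustrative Python 2.7 snippet (not part of the reference text above) showing the new-style memoryview and the old-style buffer object sharing one bytearray:

data = bytearray(b"hello world")

m = memoryview(data)        # new-style buffer interface (cf. PyMemoryView_FromObject)
print m[0:5].tobytes()      # hello

old = buffer(data, 6, 5)    # old-style buffer object with offset 6 and size 5
print str(old)              # world

Because both objects merely reference the bytearray's memory rather than copying it, writing through the memoryview (for example m[0] = "H") changes data in place.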
https://docs.python.org/2/c-api/buffer.html
2016-09-25T00:16:44
CC-MAIN-2016-40
1474738659680.65
[]
docs.python.org
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up Ch. NR 21 Note Note: Chapter NR 21 as it existed on February 28, 1979, was repealed and a new chapter NR 21 was created effective March 1, 1979. Corrections made under s. 13.93 (2m) (b) 7., Stats., Register, January, 1999, No. 517 . NR 21.01 NR 21.01 Purpose. NR 21.01(1) (1) This chapter, along with other applicable rules and statutes, regulates fishing in the Wisconsin-Minnesota boundary waters. NR 21.01(2) (2) The rules contained in this chapter are not intended to, nor do they authorize, the sale or introduction into interstate commerce for purposes of human consumption or use fish taken from the Wisconsin-Minnesota boundary waters which fail to meet or comply with food and drug administration (FDA) standards. NR 21.01 History History: Cr. Register, February, 1979, No. 278 , eff. 3-1-79. NR 21.015 NR 21.015 License waiver. NR 21.015(1) (1) On the first Saturday and consecutive Sunday of June each year, no fishing license is required to fish the Wisconsin-Minnesota boundary waters. NR 21.015(2) (2) The license waiver of sub. (1) does not apply to commercial fishing license requirements. NR 21.015 History History: Cr. Register, April, 1987, No. 376 , eff. 5-1-87; am. (1), Register, April, 1989, No. 400 , eff. 5-1-89; am. (1), Register, May, 1995, No. 473 , eff. 6-1-95. NR 21.02 NR 21.02 Definitions. Except as otherwise specifically defined in the statutes, the following terms, for the purposes of this chapter, are defined as follows: NR 21.02(1) (1) "Bait net" has the meaning given in s. NR 22.02 (1) . NR 21.02(2) (2) "Buffalo net" has the meaning given in s. NR 22.02 (2) . NR 21.02(3) (3) "Closed season" means that period of the year not included in the open season for each species of fish as provided in this chapter. NR 21.02(4) (4) "Commercial fish" means rough and detrimental fish as defined by this chapter, shovelnose (hackleback) sturgeon 25 inches long or longer only when taken on setlines, catfish 15 inches long or longer or dressed catfish at least 12 inches long and bullheads of any length when taken with commercial fishing gear and all taken while fishing under a commercial fishing license on the Mississippi River. NR 21.02(5) (5) "Commercial fishing gear" or "commercial gear" is that equipment specifically authorized for use in commercial fishing by this chapter. NR 21.02(6) (6) "Commercial fishing licenses" means those licenses issued pursuant to ss. 29.523 and 29.533 , Stats. NR 21.02(7) (7) "Constant net attendance" means the continuous presence of a commercial fisher who remains on the water or ice within sight of his or her nets at all times without the aid of vision magnifying devices such as binoculars or spotting telescopes, except that after a net has been drawn, a licensed commercial fisher may temporarily leave the net to transport fish taken from the net to a landing. NR 21.02(8) (8) "Daily bag limit" has the meaning given in s. NR 20.03 (8) for fish and for turtles means the maximum number specified by rule of a turtle species which may be reduced to a person's possession in a single day. NR 21.02(9) (9) "Dead set gill net" means a gill net that is set and allowed to catch fish without being moved and without constant net attendance by the operator. NR 21.02(10) (10) "Detrimental fish" means all species of Asian carp, including bighead, silver, grass and black carp. NR 21.02(11) (11) "Dressed fish" means a fish with the head and viscera removed but the tail on. 
NR 21.02(12) (12) "Drift net" means a net of any type that is not staked or anchored at one or both ends and is free to drift or move under the influence of wind or water current, whether or not the net has constant net attendance. NR 21.02(13) (13) "Drive netting" means a method of operating a net so that the operator is in constant net attendance and uses boats, motors, oars, plungers or other devices to create sound or vibrations in the water so as to chase, move or drive fish in the direction of the net. NR 21.02(14) (14) "Drive set gill net" means a gill net that is operated without being moved and has constant net attendance. NR 21.02(15) (15) "Fisher" means any person engaged in fishing. NR 21.02(16) (16) "Frame net" or "fyke net" has the meaning given in s. NR 22.02 (15) . NR 21.02(17) (17) "Gill net" has the meaning found in s. 29.522 (2) (b) , Stats. NR 21.02(18) (18) "Hooking" means, as used in s. NR 21.13 , any activity which utilizes a dull-pointed, metal, barbless hook attached to a staff to remove a turtle from a body of water. NR 21.02(19) (19) "Hoop net" has the meaning given in s. NR 22.02 (18) . NR 21.02(20) (20) "Lead" has the meaning given in s. NR 22.02 (19) . NR 21.02(21) (21) "Length" for the purposes of measuring a fish, unless otherwise specified, means the distance measured in a straight line from the tip of the snout to the outermost end of the tail with the tail or caudal fin fully extended. NR 21.02(22) (22) "Lower pool 7" means that part of the Mississippi River bounded on the north by an imaginary line at a compass bearing of 65 1/2 degrees from river mile marker 709.5 to the north end of the Burlington Northern and Santa Fe main railroad track bridge that crosses the Black River; on the east by the Burlington Northern and Santa Fe main railroad tracks; on the south by the U.S. army corps of engineers lock and dam 7 dike; and on the west by the state line boundary between Wisconsin and Minnesota. NR 21.02(23) (23) "Lower pool 8" means that part of the Mississippi River bounded on the north by state highway 16; on the east by the Burlington Northern and Santa Fe main railroad tracks; on the south by the U.S. army corps of engineers lock and dam 8 dike; and on the west by the state line boundary between Wisconsin and Minnesota. NR 21.02(24) (24) "Minnows" means all species defined as such in s. 29.001 , Stats., and bullheads not exceeding four inches in length. NR 21.02(25) (25) "Mississippi River" means all waters lying between the Chicago, Milwaukee, St. Paul and Pacific railroad tracks on the Minnesota side of the river, and the Burlington Northern and Santa Fe railroad tracks lying on the Wisconsin side of the river. Mississippi River includes Lake Onalaska and Lake Pepin. NR 21.02(26) (26) "Possession limit" has the meaning given in s. NR 20.03 (31) for fish, except in s. NR 21.13 where "possession limit" means the maximum number of a turtle species or group of turtle species set in s. NR 21.13 which may be possessed by a person at any time. NR 21.02(27) (27) "Rough fish" means all species defined as such in s. 29.001 , Stats., and detrimental fish including amur carp which is also known as grass carp (Ctenopharyngodon idella). NR 21.02(28) (28) "Seine" has the meaning found in s. 29.522 (3) , Stats. 
NR 21.02(29) (29) "Seine haul" means a single setting, retrieval and emptying of a seine, including placement of the net, driving fish in the direction of the net, drawing or lifting the net, or both, to entrap fish by retrieving one or both ends of the net, bagging the fish in the net, sorting and removal of the game fish in the net and removal of all fish from the net. NR 21.02 Note Note: A single seine haul may take more than one day to complete from the time the net is set until all fish have been removed from the net. NR 21.02(30) (30) "Setline" has the meaning given in s. NR 20.03 (36) and is also commonly known as a trotline. NR 21.02(31) (31) "Slat net" or "basket trap" has the meaning given in s. NR 22.02 (28) . NR 21.02(32) (32) "Sport fishing" or "angling" means any fishing, including the methods commonly known as hook and line fishing or angling, which is conducted without a commercial fishing license and with other than commercial fishing gear, but does not include the taking of turtles. NR 21.02(33) (33) "Stretch measure" means the extension measure of net mesh size whenever the size of mesh of a net is specified and is the distance between the extreme angles of any single mesh with the mesh fully stretched. NR 21.02(34) (34) "Supervisor" means any department employee assigned or designated to oversee fishing activities conducted under this chapter. NR 21.02(35) (35) "Trammel net" has the meaning found in s. 29.522 (2) (a) , Stats. NR 21.02(36) (36) "Turtle" means a reptile having horny, toothless jaws and a body enclosed in a bony or leathery shell into which the head, limbs and tail may be partially or fully withdrawn, and includes parts of turtles and turtle eggs. NR 21.02(37) (37) "Upper pool 7" means that part of the Mississippi River bounded on the north by the U.S. army corps of engineers lock and dam 6 dike; on the east by the Burlington Northern and Santa Fe main railroad tracks; on the south by an imaginary line at a compass bearing of 651/2 degrees from river mile marker 709.5 to the north end of the Burlington Northern and Santa Fe main railroad track bridge that crosses the Black River; and on the west by the state line boundary between Wisconsin and Minnesota. NR 21.02(38) (38) "Upper pool 8" means that part of the Mississippi River bounded on the north by the U.S. army corps of engineers lock and dam 7 dike, on the east by the Burlington Northern and Santa Fe main railroad tracks, on the south by state highway 16, and on the west by the state line boundary between Wisconsin and Minnesota. NR 21.02(39) (39) "Wisconsin-Minnesota boundary waters" for sport fishing purposes, means all waters of the Mississippi River, Lake St. Croix, the St. Croix River from the Burlington Northern and Santa Fe railroad bridge at Prescott, as far in a northerly direction as the St. Croix River forms and acts as boundary waters between the states of Wisconsin and Minnesota, and the St. Louis River from the north-south Wisconsin-Minnesota boundary line downstream to the Lake Superior beach line in the Superior entry including St. Louis Bay, Superior Bay, Little Pokegama Bay, Pokegama Bay upstream to highway 105, Kimballs Bay, Howard Bay, Allouez Bay and all other Bays connected to the St. Louis River. For the purpose of taking turtles and commercial fishing, "Wisconsin-Minnesota boundary waters" means all waters from the Burlington Northern and Sante Fe railroad tracks on the east side of the Mississippi River and from the east bank of the St. 
Croix River in Wisconsin, extending west to the state line between Wisconsin and Minnesota. NR 21.02 History History: Cr. Register, February, 1979, No. 278 , eff. 3-1-79; am. (2) Register, April, 1983, No. 328 , eff. 5-1-83; r. and recr. (9), Register, February, 1991, No. 422 , eff. 3-1-91; am. (2), Register, December, 1993, No. 456 , eff. 1-1-94; cr. (4m), (6m) and (15m), am. (10), Register, February, 1997, No. 494 , eff. 3-1-97; am. (2), (4m), (9) and (16) and cr. (4g), (4p), (5e), (5m), (5t), (6g), (7m), (7r), (11f), (11m), (15p) and (15v), Register, October, 1998, No. 514 , eff. 11-1-98; am. (16), Register, May, 1999, No. 521 , eff. 6-1-99; CR 04-024 : am. (4m) and (10) Register December 2004 No. 588 , eff. 1-1-05; CR 10-053 : r. and recr. Register December 2010 No. 660 , eff. 1-1-11; renumbering of (15) and (16) and correction in (30) made under s. 13.92 (4) (b) 1. and 7. , Stats., Register December 2010 No. 660 . subch. I of ch. NR 21 Subchapter I — Sport Fishing NR 21.03 NR 21.03 Reciprocity, sport fishing or spearing and dip netting. All residents of Wisconsin and Minnesota holding a resident fishing license from their respective states or residents other than Wisconsin and Minnesota holding an angling or sport fishing license issued by either state, may fish in any of the waters of the Mississippi river lying between the Burlington Northern and Santa Fe railroad tracks on the Wisconsin side of the river and the Chicago, Milwaukee, St. Paul and Pacific railroad tracks on the Minnesota side of the river, including all sloughs and backwaters, bays and newly-extended water areas connected with the main channel of the Mississippi river by a channel which is navigable at periods when the water is approximately equal to normal pool elevation as created by the U.S. department of the army, and in the waters of Lake St. Croix, and the St. Croix river and the St. Louis river as defined in s. NR 21.02 (39) . This reciprocity applies only to sport fishing, spearing, dip netting, and the taking of minnows and crayfish for such fishing. NR 21.03 History History: Cr. Register, February, 1979, No. 278 , eff. 3-1-79; am. Register, October, 1998, No. 514 , eff. 11-1-98; CR 10-053 : am. Register December 2010 No. 660 , eff. 1-1-11. NR 21.04 NR 21.04 Sport fishing; seasons and limits. All regulations applicable to sport fishing are as follows unless expressly provided elsewhere in this chapter or the law. (23.11, 29.041) - See PDF for table suspended or revoked may not act as a helper or crew member for another licensee during the period of suspension or revocation. Down Down /code/admin_code/nr/001/21 true administrativecode /code/admin_code/nr/001/21/02/19 administrativecode/NR 21.02(19) administrativecode/NR 21.02?
http://docs.legis.wisconsin.gov/code/admin_code/nr/001/21/02/19
2013-05-18T18:33:02
CC-MAIN-2013-20
1368696382705
[]
docs.legis.wisconsin.gov
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up ATCP 127.34 Disclosures prior to sale. ATCP 127.36 Prize promotions. ATCP 127.38 Unauthorized payment. ATCP 127.40 Delivering ordered goods. ATCP 127.42 Credit card laundering. ATCP 127.44 Misrepresentations. ATCP 127.46 Prohibited practices. ATCP 127.48 Recordkeeping. ATCP 127.50 Assisting violations. Subchapter IV — Face-to-Face Solicitations ATCP 127.60 Definitions. ATCP 127.62 Opening disclosures. ATCP 127.64 Disclosures prior to sale. ATCP 127.66 Prize promotions. ATCP 127.68 Unauthorized payment. ATCP 127.70 Credit card laundering. ATCP 127.72 Misrepresentations. ATCP 127.74 Prohibited practices. ATCP 127.76 Recordkeeping. ATCP 127.78 Assisting violations. Subchapter V — Telephone Solicitations; No-Call List ATCP 127.80 Definitions. ATCP 127.81 Telephone solicitors; registration. ATCP 127.82 No-call list. ATCP 127.83 Telephone solicitation practices. ATCP 127.84 Record keeping. Ch. ATCP 127 Note Note: This chapter is adopted under authority of s. 100.20 (2) , Stats., and is administered by the Wisconsin department of agriculture, trade and consumer protection. Violations of this chapter may be prosecuted under s. 100.20 (6) and s. 100.26 (3) or (6) , Stats. A person who suffers a monetary loss because of a violation of this chapter may sue the violator directly under s. 100.20 (5) , Stats., and may recover twice the amount of the loss, together with costs and reasonable attorneys' fees. Subchapter V is also adopted under authority of s. 100.52 , Stats. A telephone solicitation to a residential telephone customer included on the "no-call" list under subch. V does not, by itself, result in a monetary loss for which the customer may seek recovery under s. 100.20(5) , Stats., unless the residential telephone customer sustains an actual monetary loss as a result of another violation of this chapter. Ch. ATCP 127 Note Note: Chapter Ag 127 was renumbered ch. ATCP 127 under s. 13.93 (2m) (b) 1., Stats., Register, April, 1993, No. 448 . Chapter ATCP 127 as it existed on July 31, 1999 was repealed and a new chapter ATCP 127 was created effective August 1, 1999. subch. I of ch. ATCP 127 Subchapter I — Definitions ATCP 127.01 ATCP 127.01 Definitions. In this chapter: ATCP 127.01(1) (1) "Acquirer" means a financial institution or other person who, under a license or authorization granted by a credit card system operator, authorizes merchants to honor credit cards and submit credit card sales drafts for payment through the credit card system. ATCP 127.01(2) (2) "Consumer" means an individual to whom a seller advertises, offers to sell, sells or promotes the sale of consumer goods or services. "Consumer" does not include an individual who purchases consumer goods or services in a business capacity, or for resale to others. ATCP 127.01(3) (3) "Consumer goods or services" means goods or services typically used for personal, family or household purposes. "Consumer goods or services" includes personal investment opportunities, personal business opportunities and personal training courses but does not include any of the following: ATCP 127.01(3)(a) (a) Investment opportunities, business opportunities and training courses when offered to a business, rather than a consumer. ATCP 127.01(3)(b) (b) Real estate, other than cemetery lots or timeshares as defined in s. 707.02 (24) , Stats. ATCP 127.01(3)(c) (c) Pay-per-call services sold in compliance with s. 196.208 , Stats. 
ATCP 127.01(3)(d) (d) A newspaper subscription that the consumer may cancel at any time without penalty. ATCP 127.01(4) (4) "Credit" means the right granted by a creditor to a debtor to defer payment of debt or to incur debt and defer its payment. ATCP 127.01(5) (5) "Credit card" means any card or other device which entitles an authorized holder to obtain goods, services or other things of value on credit. ATCP 127.01(6) (6) "Credit card sales draft" means any record or evidence of a credit card transaction. ATCP 127.01(7) (7) "Credit card system" means the system through which credit card transactions, using credit cards issued or licensed by the credit card system operator, are processed for payment. ATCP 127.01(8) (8) "Credit card system operator" means a person who operates a credit card system, or who licenses others to operate a credit card system. ATCP 127.01(9) (9) "Department" means the state of Wisconsin department of agriculture, trade and consumer protection. ATCP 127.01(10) (10) "Disclose" means to make a clear and conspicuous statement which is reasonably designed to be noticed and readily understood by the consumer. ATCP 127.01(11) (11) "Individual" means a natural person. ATCP 127.01(12) (12) "Investment opportunity" means anything, tangible or intangible, that is offered, sold or traded based wholly or in part on representations, either express or implied, about past, present or future income, profit or appreciation. "Investment opportunity" does not include a security sold in compliance with ch. 551 , Stats. , or a franchise investment sold in compliance with ch. 553 , Stats. ATCP 127.01(13) (13) "Mass advertisement" means a solicitation which a seller publishes or makes accessible to an unrestricted mass audience. "Mass advertisement" includes a solicitation published in a newspaper, magazine, radio broadcast, television broadcast or internet home page. "Mass advertisement" does not include a solicitation which a seller addresses to an individual consumer, to a consumer's residence, or to a gathering of consumers invited by means of telephone, mail or face-to-face solicitations under this chapter. ATCP 127.01(14) (14) "Merchant" means a person who is authorized, under a written agreement with an acquirer, to honor credit cards and submit credit card sales drafts to the acquirer for payment and processing through the credit card system. ATCP 127.01(15) (15) "Person" means an individual, corporation, partnership, cooperative, limited liability company, trust or other legal entity. ATCP 127.01(16) (16) "Prize promotion" means any of the following: ATCP 127.01(16)(a) (a) A sweepstakes or other game of chance. ATCP 127.01(16)(b) (b) A seller's express or implied representation that a consumer has won, has been selected to receive, may be eligible to receive, or may have a chance to receive a prize. ATCP 127.01(16)(c) (c) Any communication from a seller to a consumer in which the seller is required to give the consumer a prize notice under s. 100.171 , Stats. ATCP 127.01(17) (17) "Purchase" means to buy or lease consumer goods or services. ATCP 127.01(18) (18) "Purchase contract" means an agreement to purchase consumer goods or services, regardless of whether that agreement is subject to a later right of cancellation. 
"Purchase contract" does not include the following agreements, but does include a purchase commitment which arises under any of those agreements as a result of the consumer's subsequent action or omission: ATCP 127.01(18)(a) (a) An agreement authorizing the trial delivery of consumer goods or services which the consumer has not yet agreed to purchase, provided that the agreement includes no minimum purchase requirement. ATCP 127.01(18)(b) (b) A negative option plan that is covered by and complies with 16 CFR 425 . ATCP 127.01 Note Note: Some direct marketers offer trial delivery plans in which the consumer agrees to receive trial deliveries of goods which the consumer has not yet agreed to purchase. Under these agreements, a consumer is typically free to reject or return any trial delivery without purchasing that delivery. But under the trial delivery agreement, the seller may bill the consumer for the delivered goods if the consumer fails to reject or return the delivery within a specified time. Although the consumer's initial agreement to receive trial deliveries is not itself a "purchase contract" (unless it includes a minimum purchase commitment), the consumer effectively enters into a "purchase contract" for a particular delivery when the consumer fails to return or reject that delivery according to the trial delivery agreement. ATCP 127.01(19) (19) "Sale" means the passing of an ownership or leasehold interest in consumer goods or services to a consumer for a price. ATCP 127.01(20) (20) "Sell" means to engage in the sale of consumer goods or services, or to accept payment pursuant to a purported sale of consumer goods or services. ATCP 127.01(21) (21) "Seller" means a person, other than a bank, savings bank, savings and loan association, credit union, insurance company, public utility or telecommunications carrier engaged in exempt activities under s. 93.01 (1m) , Stats., who is engaged in the business of selling, offering to sell, or promoting the sale of consumer goods or services to consumers. "Seller" includes all of the following: ATCP 127.01(21)(a) (a) A person who accepts payment for a purported sale of consumer goods or services to a consumer. ATCP 127.01(21)(b) (b) An employee or agent of a seller. ATCP 127.01(21)(c) (c) A person who makes solicitations under arrangement with a seller. ATCP 127.01 Note Note: For example, a telemarketing firm that makes telephone solicitations on behalf of a "seller" is also a "seller" for purposes of this chapter. Individual employees of the telemarketing firm are also "sellers," for purposes of this chapter, when making telephone solicitations to consumers. ATCP 127.01(22) (22) "Solicitation" means a communication received by a consumer at a place other than the seller's regular place of business, in which a seller offers or promotes the sale of consumer goods or services to a consumer, or which is part of a seller's plan or scheme to sell consumer goods or services to a consumer. "Solicitation" does not include any of the following: ATCP 127.01(22)(a) (a) A mass advertisement. ATCP 127.01(22)(b) (b) A telephone, mail or electronic communication initiated by the consumer, unless prompted by the seller's prior solicitation to the consumer. ATCP 127.01 Note Note: Paragraph (b) does not except a face-to-face communication. ATCP 127.01(22)(c) (c) A written communication that invites a consumer to the seller's regular place of business. 
ATCP 127.01(22)(d) (d) A communication initiated by a consumer at an established public market, unless that communication was prompted by the seller's prior solicitation to the consumer. ATCP 127.01 Note Note: For example, a routine transaction at a farmers market is not a "solicitation" under this chapter, even though it occurs at a place other than the seller's "regular place of business." ATCP 127.01(22)(e) (e) The delivery, to a consumer, of goods or services sold to the consumer in a transaction other than a telephone, mail or face-to-face transaction under this chapter. ATCP 127.01 Note Note: A "solicitation" under sub. (22) is covered by this rule even though it is not the first communication between the seller and the consumer. ATCP 127.01(23) (23) "Written" or "in writing," as applied to a seller's disclosure to a consumer, means legibly printed on paper or another tangible nonelectronic medium that is delivered to the consumer, or legibly printed in an electronic form that the consumer can electronically retrieve, store or print for future reference. ATCP 127.01 History History: Cr. Register, July, 1999, No. 523 , eff. 8-1-99; CR 02-036 : am. (15), Register November 2002 No. 563 , eff. 12-1-02; CR 04-005 : am. (21) (c) Register October 2004 No. 586 , eff. 11-1-04. subch. II of ch. ATCP 127 Subchapter II — Telephone Solicitations ATCP 127.02 ATCP 127.02 Definitions. In this subchapter: ATCP 127.02(1) (1) "Telephone solicitation" means a solicitation, under s. ATCP 127.01 (22) , that a seller makes to a consumer by telephone, videoconferencing, or other interactive electronic voice communications. ATCP 127.02(2) (2) "Telephone transaction" means any of the following: ATCP 127.02(2)(a) (a) A telephone solicitation. ATCP 127.02(2)(b) (b) Purchase contracts and other dealings that result from a telephone solicitation. ATCP 127.02 History History: Cr. Register, July, 1999, No. 523 , eff. 8-1-99. ATCP 127.04 ATCP 127.04 Opening disclosures. ATCP 127.04(1) (1) Disclosures required. A seller making a telephone solicitation shall disclose all of the following to the consumer before asking any questions or making any statements other than an initial greeting: ATCP 127.04(1)(a) (a) The name of the principal seller. ATCP 127.04 Note Note: For example, a telemarketing firm making solicitations on behalf of another company must disclose the name of the company for which it is acting as agent. The telemarketing firm may also disclose its own identity, but is not required to do so. ATCP 127.04(1)(b) (b) The name of the individual making the telephone solicitation. Down Down /code/admin_code/atcp/127 true administrativecode /code/admin_code/atcp/127/I administrativecode/subch. I of ch. ATCP 127 administrativecode/subch. I of ch. ATCP 127?
http://docs.legis.wisconsin.gov/code/admin_code/atcp/127/I
2013-05-18T18:44:52
CC-MAIN-2013-20
1368696382705
[]
docs.legis.wisconsin.gov
UWS 18.08(6) Physical security compliance. [Excerpt from ch. UWS 18; only the section references UWS 18.08(6)(a), 18.08(9)(b), 18.09(1)(c), 18.09(2)(b), 18.10(4)(b) and 18.11(1)(b) survived extraction — the rule text itself is not recoverable.]
http://docs.legis.wisconsin.gov/code/admin_code/uws/18/09/1/c
2013-05-18T18:44:02
CC-MAIN-2013-20
1368696382705
[]
docs.legis.wisconsin.gov
Here are some places you can locate others who want to help. During setup or testing, you may have questions about how to do something, or end up in a situation where you can't seem to get a feature to work correctly. One place to look for help is the Answers section on Launchpad. Launchpad is the "home" for the project code and its developers and thus is a natural place to ask about the project. When visiting the Answers section, it is usually good to at least scan over recently asked questions to see if your question has already been answered. If that is not the case, then proceed to adding a new question. Be sure you give a clear, concise summary in the title and provide as much detail as possible in the description. Paste in your command output or stack traces, link to screenshots, and so on. The Launchpad Answers areas are available here - OpenStack Compute: OpenStack Object Storage:. Posting your question or scenario to the OpenStack mailing list is a great way to get answers and insights. You can learn from and help others who may have the same scenario as you. Go to and click "Subscribe to mailing list" or view the archives at. The OpenStack wiki contains content on a broad range of topics, but some of it sits a bit below the surface. Fortunately, the wiki search feature is very powerful in that it can do both searches by title and by content. If you are searching for specific information, say about "networking" or "api" for nova, you can find lots of content using the search feature. More is being added all the time, so be sure to check back often. You can find the search box in the upper right hand corner of any OpenStack wiki page. So you think you've found a bug. That's great! Seriously, it is. The OpenStack community values your setup and testing efforts and wants your feedback. To log a bug you must have a Launchpad account, so sign up at if you do not already have a Launchpad ID. You can view existing bugs and report your bug in the Launchpad Bugs area. It is suggested that you first use the search facility to see if the bug you found has already been reported (or even better, already fixed). If it still seems like your bug is new or unreported then it is time to fill out a bug report. Some tips: Give a clear, concise summary! Provide as much detail as possible in the description. Paste in your command output or stack traces, link to screenshots, etc. Be sure to include what version of the software you are using. This is especially critical if you are using a development branch eg. "Austin release" vs lp:nova rev.396. Any deployment specific info is helpful as well. eg. Ubuntu 10.04, multi-node install. The Launchpad Bugs areas are available here - OpenStack Compute: OpenStack Object Storage: The OpenStack community lives and breathes in the #openstack IRC channel on the Freenode network. You can come by to hang out, ask questions, or get immediate feedback for urgent and pressing issues. To get into the IRC channel you need to install an IRC client or use a browser-based client by going to. You can also use Colloquy (Mac OS X,) or mIRC (Windows,) or XChat (Linux). When you are in the IRC channel and want to share code or command output, the generally accepted method is to use a Paste Bin, the OpenStack project has one at. Just paste your longer amounts of text or logs in the web form and you get a URL you can then paste into the channel. The OpenStack IRC channel is: #openstack on irc.freenode.net.
http://docs.openstack.org/diablo/openstack-object-storage/admin/content/community-support.html
2013-05-18T18:24:00
CC-MAIN-2013-20
1368696382705
[]
docs.openstack.org
class mbox(path[, factory[, create]]) — A subclass of Mailbox for mailboxes in mbox format. Parameter factory is a callable object that accepts a file-like message representation and returns a custom representation. If factory is None, mboxMessage is used as the default message representation. If create is True, the mailbox is created if it does not exist. The mbox format is the classic format for storing mail on Unix systems. All messages in an mbox mailbox are stored in a single file with the beginning of each message indicated by a line whose first five characters are "From ". Several variations of the mbox format exist to address perceived shortcomings in the original. In the interest of compatibility, mbox implements the original format, which is sometimes referred to as mboxo. This means that the Content-Length: header, if present, is ignored and that any occurrences of "From " at the beginning of a line in a message body are transformed to ">From " when storing the message, although occurrences of ">From " are not transformed to "From " when reading the message. Some Mailbox methods implemented by mbox deserve special remarks:
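While the original list of those remarks is not reproduced here, a short usage sketch (not from the original page) shows the typical lock/add/flush pattern; the path is a placeholder:

import mailbox

box = mailbox.mbox('/tmp/example.mbox', create=True)
box.lock()                       # keep other writers out while we modify the file
try:
    for key, message in box.iteritems():
        print key, message['subject']      # list existing messages
    msg = mailbox.mboxMessage()
    msg['From'] = '[email protected]'
    msg['Subject'] = 'hello'
    msg.set_payload('body text\n')
    box.add(msg)                 # "From " lines in the body are escaped to ">From "
    box.flush()                  # write pending changes back to disk
finally:
    box.unlock()
    box.close()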
http://docs.python.org/release/2.5.1/lib/mailbox-mbox.html
2013-05-18T18:34:02
CC-MAIN-2013-20
1368696382705
[]
docs.python.org
Conflict Detection
When conflict detection is enabled, Cisco Defense Orchestrator (Defense Orchestrator) polls the device it manages every 10 minutes to determine whether a change has been made to the device's configuration outside of Defense Orchestrator. If Defense Orchestrator detects that a change was made, it marks the configuration status for the device as Conflict Detected. Changes made to a device outside of Defense Orchestrator are called "out-of-band" changes.
Note: When the FirePOWER policy is managed by Defense Orchestrator, users cannot make out-of-band changes. Any changes made are overridden by Defense Orchestrator.
Enable Conflict Detection
Enabling conflict detection alerts you to instances where changes have been made to a device outside of Defense Orchestrator.
- Open the Devices & Services page.
- Select the device or devices for which you want to enable conflict detection.
- In the Conflict Detection box at the right of the device table, select Enabled from the list.
https://docs.defenseorchestrator.com/Welcome_to_Cisco_Defense_Orchestrator/Getting_Started_with_Cisco_Defense_Orchestrator/Synchronizing_Configurations_Between_Defense_Orchestrator_and_Device/0010_Conflict_Detection
2019-03-19T00:16:40
CC-MAIN-2019-13
1552912201812.2
[array(['https://docs.defenseorchestrator.com/@api/deki/files/518/auto-accept-menu.png?revision=2', 'auto-accept-menu.png'], dtype=object) ]
docs.defenseorchestrator.com
JavaScript Tests for Joomla4 From Joomla! Documentation Contents Introduction For Running JavaScript Tests for the Joomla 3.x please see Running JavaScript Tests for the Joomla CMS. Joomla! 4.x core currently has some custom written JavaScript libraries used in performing various tasks. This documentation is about the environment setup used in Joomla! 4.x for testing those JavaScript libraries and how to write new tests. You can find the current version of the tests on Github.com in the repo. When you checkout the 4.0-dev. $ ls acceptance.suite.yml Gemfile package.json administrator htaccess.txt phpunit.xml.dist appveyor-phpunit.xml images plugins build includes README.md build.js index.php README.txt build.xml installation RoboFile.dist.ini cache Jenkinsfile RoboFile.php cli jenkins-phpunit.xml robots.txt.dist codeception.yml karma.conf.js scss-lint.yml components language templates composer.json layouts -> test composer.lock libraries tmp configuration.php LICENSE.txt travisci-phpunit.xml dev media web.config.txt drone-package.json modules $. This will give you a full setup that might be different from the automated testing setup. For automated testing we are using the drone-package.json. If you are rename package.json to package.json.save and drone-package.json to package.json before you run npm install you will get a lighter setup and you will using the versions we are using in our automated testing setup. Starting the Karma server and running the tests Execute the commandnpm run test This starts the web server and automatically opens the browser Firefox. Then the tests would be run and the detailed results will be shown in the command line itself. [33m05 02 2018 13:10:01.666:WARN [watcher]: [39mAll files matched by "/var/www/html/JOOMLA/joomla4/joomla-cms/media/system/js/core.js" were excluded or matched by prior matchers. [32m05 02 2018 13:10:02.009:INFO [karma]: [39mKarma v2.0.0 server started at [32m05 02 2018 13:10:02.010:INFO [launcher]: [39mLaunching browser Firefox with unlimited concurrency [32m05 02 2018 13:10:02.015:INFO [launcher]: [39mStarting browser Firefox [32m05 02 2018 13:10:04.159:INFO [Firefox 58.0.0 (Ubuntu 0.0.0)]: [39mConnected on socket 0l2b3VhoFFUVyWitAAAA with id 91983742 [33m05 02 2018 13:10:04.493:WARN [web-server]: [39m404: /uri [32m05 02 2018 13:10:04.556:INFO [Firefox 58.0.0 (Ubuntu 0.0.0)]: [39mStarting tests 91983742 Firefox 58.0.0 (Ubuntu 0.0.0): Executed 0 of 125 SUCCESS (0 secs / 0 secs) ... Browser results: - Firefox 58.0.0 (Ubuntu 0.0.0): 125 tests - 125 ok What the above command does is execute node node_modules/karma/bin/karma start karma.conf.js --single-run in the background. You can check this specification in the script section of the file package.json The magic inside The karma.conf.js file mentioned in the command for starting the Karma server and that you can find in the root of Joomlaǃ 4.x, is the configuration file for the Karma server. Below you can see the content of this file. // Karma configuration module.exports = function (config) { config.set({ // base path that will be used to resolve all patterns (eg. 
files, exclude) basePath: '', // frameworks to use // available frameworks: frameworks: ['jasmine-ajax', 'jasmine', 'requirejs'], // list of files / patterns to load in the browser files: [ {pattern: 'media/system/js/polyfills/webcomponents/webcomponents-ce.min.js', included: true, served: true, watched: true}, {pattern: 'node_modules/jquery/dist/jquery.min.js', included: false}, {pattern: 'node_modules/jasmine-jquery/lib/jasmine-jquery.js', included: false}, {pattern: 'node_modules/text/text.js', included: false}, {pattern: 'media/vendor/bootstrap/js/bootstrap.min.js', included: false}, {pattern: 'media/vendor/jquery-ui/js/jquery.ui.core.min.js', included: false}, {pattern: 'media/vendor/jquery-ui/js/jquery.ui.sortable.min.js', included: false}, {pattern: 'media/system/js/*.js', included: false}, {pattern: 'media/system/js/core.js', included: false,served: true, watched: true}, {pattern: 'media/system/js/legacy/*.js', included: false}, {pattern: 'media/system/js/fields/*.js', included: false}, {pattern: 'media/vendor/joomla-custom-elements/js/joomla-alert.min.js', included: false, served: true, watched: true}, {pattern: 'media/system/js/fields/calendar-locales/*.js', included: false}, {pattern: 'media/system/js/fields/calendar-locales/date/gregorian/*.js', included: false}, {pattern: 'tests/javascript/**/fixture.html', included: false}, {pattern: 'tests/javascript/**/spec.js', included: false}, {pattern: 'tests/javascript/**/spec-setup.js', included: false}, {pattern: 'media/system/webcomponents/js/*.js', included: false}, {pattern: 'images/*.png', included: false}, 'tests/javascript/test-main.js' ], exclude: [ 'media/system/webcomponents/js/*-es5.js', 'media/system/webcomponents/js/*.min.js', 'media/system/webcomponents/js/*-es5.min.js', ], // preprocess matching files before serving them to the browser // available preprocessors: preprocessors: { '**/system/js/*.js': ['coverage'] }, // coverage reporter configuration coverageReporter: { type : 'html', dir : 'build/coverage-js/' }, // test results reporter to use // possible values: 'dots', 'progress' // available reporters: reporters: ['verbose', : ['Firefox'], // Continuous Integration mode // if true, Karma captures browsers, runs the tests and exits singleRun: false, // list of plugins plugins: [ 'karma-jasmine', 'karma-jasmine-ajax', 'karma-firefox-launcher', 'karma-coverage', 'karma-requirejs', 'karma-verbose-reporter' ], // Concurrency level // how many browser should be started simultaneous concurrency: Infinity }); };. If singleRun is set to true, Karma will start and capture all configured browsers, run tests and then exit with an exit code of 0 or 1 depending on whether all tests passed or any tests failed.singleRun a - as we do. node node_modules/karma/bin/karma start karma.conf.js --single-run The next important file we need to look at is the test-main.js file which is an auto generated file from require.js. We use require.js in this setup to make dynamic loading of dependencies in test specs possible. You can find this file in the folder /test/javascript/. Below you can see the content of this file. const allTestFiles = []; const TEST_REGEXP = /(spec|test)\.js$/i; // Get a list of all the test files to include Object.keys(window.__karma__.files).forEach((file) => { if (TEST_REGEXP.test(file)) { // Normalize paths to RequireJS module names. 
// If you require sub-dependencies of test files to be loaded as-is (requiring file extension) // then do not normalize the paths const normalizedTestModule = file.replace(/^\/base\/|\.js$/g, ''); allTestFiles.push(normalizedTestModule); } }); require.config({ // Karma serves files under /base, which is the basePath from your config file baseUrl: '/base', paths: { 'core': 'media/system/js/core.min', 'jquery': 'node_modules/jquery/dist/jquery.min', 'jui': 'media/vendor/jquery-ui/js/jquery.ui.core.min', 'jui-sortable': 'media/vendor/jquery-ui/js/jquery.ui.sortable.min', 'bootstrap': 'media/vendor/bootstrap/js/bootstrap.min', 'jasmineJquery': 'node_modules/jasmine-jquery/lib/jasmine-jquery', 'libs': 'media/system/js', 'legacy_libs': 'media/system/js/legacy', 'testsRoot': 'tests/javascript', 'text': 'node_modules/text/text', 'fields': 'media/system/js/fields', 'calLang': 'media/system/js/fields/calendar-locales/en', 'calDate': 'media/system/js/fields/calendar-locales/date/gregorian/date-helper', 'JCE': 'media/system/webcomponents/js' }, shim: { jasmineJquery: ['jquery'], bootstrap: ['jquery'], 'jui-sortable': ['jquery'], 'libs/validate': { deps: [] }, 'libs/subform-repeatable': { deps: ['jquery', 'jui', 'jui-sortable'] }, 'JCE/joomla-field-send-test-mail': { deps: ['jquery'] }, 'libs/fields/calendar': { deps: ['calLang', 'calDate'] } }, // dynamically load all test files deps: allTestFiles, // we have to kickoff jasmine, as it is asynchronous callback: window.__karma__.start }); Two important changes are done to the auto generated file. - The first is the paths attribute set in in the require configuration. These paths allow us to have aliases assigned to specific JavaScript files. This way for example, whenever we need to add "node_modules/jquery/dist/jquery.min". -tests --javascript ---fixtures ----fixture.html ---spec-setup.js ---spec.js. You can find this plugin in the folder /node-modules For more information see. Code example On the end of the article Running JavaScript Tests for the Joomla CMS you can find some Code templates. Here you see a code example. spec.js define(['jquery', 'testsRoot/calendar/spec-setup', 'jasmineJquery'], function ($) { beforeAll(function () { var element = document.querySelector(".field-calendar"), input = document.getElementById('jform_created'), currentDate = new Date(); input.value = currentDate.getFullYear() + '-09-01 05:17:00'; input.setAttribute('data-alt-value', currentDate.getFullYear() + '-09-01 05:17:00'); JoomlaCalendar.init(element); }); describe('Calendar set for the input element', function () { it('Should have calendar element under the input element', function () { expect($('body')).toContainElement('.js-calendar'); }); it('Calendar should be hidden', function () { expect($('.js-calendar').css('display')).toEqual('none'); }); it('Should appear on button click', function (done) { $('#jform_created_btn').trigger('click'); setTimeout(function() { expect($('.js-calendar').css('display')).toEqual('block'); done(); }, 200) }); }); }); spec-setup.js define(['jquery', 'text!testsRoot/calendar/fixtures/fixture.html', 'libs/fields/calendar'], function ($, fixture) { $('body').append(fixture); }); fixtures/fixture.html <div id="calendarjs"> <form name="formTest" onsubmit="return false"> <div class="field-calendar"> <div class="input-append"> <input type="text" id="jform_created" name="jform[created]" value="" size="22" placeholder="Created date." 
data- <button type="button" class="btn btn-secondary" id="jform_created_btn" data- <span class="icon-calendar"></span> </button> </div> </div> <input type="text" name="none-spec" id="cal-close-btn" value="" title="nonenne"/> </form> </div>
https://docs.joomla.org/JavaScript_Tests_for_Joomla4/en
2019-03-19T00:36:33
CC-MAIN-2019-13
1552912201812.2
[]
docs.joomla.org
Getting data using REST
The REST service allows you to retrieve page and object data from Kentico instances. Send requests using the GET HTTP method to a URL in the format:
<REST base URL>/<resource path>
The base URL of the Kentico REST service is <site domain name>/rest. For example, if your site is running at, use as the base URL of the service. To learn about the available resource paths for pages and objects, see the tables in the sections below. The requests return data in either XML, JSON, RSS or Atom format (see Examples of data retrieved via the REST service for more information).
REST base URL
Sending a GET request to the base URL of the REST service exposes the service document (for OData browsing). The document contains a list of all available primary object types (without child and binding object types) and the URLs under which the objects can be accessed. See also: OData service documents
Character encoding
Data retrieval requests return the results in the server's default character encoding. To get the results in a different encoding type, set the encoding in the Accept-Charset field of the GET request's HTTP header. If the specified encoding is not available, the system uses the Default encoding configured in Settings -> Integration -> REST.
GET HTTP/1.1
Authorization: Basic UmVzdENsaWVudDpNeVBhc3N3b3Jk
Accept-Charset: utf-8
Content-Type: text/xml
Getting page data
To load the data of pages from Kentico websites, send GET requests to the appropriate URL – append the resource paths described below to the base URL of your REST service.
Culture constants
You can use the following constants instead of culture codes in page REST calls:
- defaultculture - the page version in the site's default culture
- allcultures - page versions in all available cultures
Example: /content/currentsite/defaultculture/document/company/careers
Getting object data
To load the data of objects from Kentico, send GET requests to the appropriate URL – append the resource paths described below to the base URL of your REST service. Most object resources start with an object type value. To find the value for specific object types, open the System application in the Kentico administration interface and select the Object types tab.
Data loading parameters
When loading data via REST, you can append the following query string parameters to the request URL:
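(The parameter table itself is not reproduced here.) Putting the earlier pieces together, a complete page-data request might look like the following sketch; mysite.com is a placeholder domain, the credentials are the sample ones from the character-encoding example above, and the resource path is the careers example from the culture-constants list:

GET /rest/content/currentsite/defaultculture/document/company/careers HTTP/1.1
Host: mysite.com
Authorization: Basic UmVzdENsaWVudDpNeVBhc3N3b3Jk
Accept-Charset: utf-8

The response body then contains the page data in one of the formats listed above (XML by default).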
https://docs.kentico.com/k10/integrating-3rd-party-systems/kentico-rest-service/getting-data-using-rest
2019-03-19T00:34:05
CC-MAIN-2019-13
1552912201812.2
[]
docs.kentico.com
If you have files that you want to integrate into the MiKTeX setup, you have several options:
Use the command-line option --include-directory=dir. For example:
latex --include-directory=C:\path\to\my\style\files thesis.tex
See the section called "Specifying Additional Input Directories", for more information.
Set environment variables. For example:
set TEXINPUTS=C:\path\to\my\style\files
latex thesis.tex
See Chapter 8, Environment variables, to learn more about MiKTeX environment variables.
Register the root of the directory tree which contains your files. The directory tree must conform to the TDS standard, i.e., you must imitate the directory tree in the MiKTeX installation directory (usually C:\Program Files\MiKTeX 2.9).
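Registration of such a root can typically be done through the MiKTeX Options (Settings) dialog or from the command line; in MiKTeX 2.9 a command along the following lines should work (the path is a placeholder for your local TDS-conformant tree):

initexmf --register-root=C:\path\to\my\texmf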
https://docs.miktex.org/2.9/manual/localadditions.html
2019-03-19T00:32:27
CC-MAIN-2019-13
1552912201812.2
[]
docs.miktex.org
8.3. The syslog says "serial line is not 8 bit clean" There are variations on this too - such as serial line looped back etc., and the cause can be one (or a sequence) of a number of things. To understand what is going on here, it is necessary to grasp a bit of what is going on behind the scenes in pppd itself. When pppd starts up, it sends LCP (link control protocol) packets to the remote machine. If it receives a valid response it then goes on to the next stage (using IPCP - IP control protocol packets) and only when this negotiation completes is the actual IP layer started so that you can use the PPP link. If there is no ppp server operating at the remote end when your PC sends lcp packets, these get reflected by the login process at the far end. As these packets use 8 bits, reflecting them strips the 8th bit (remember, ASCII is a 7 bit code). PPP sees this and complains accordingly. There are several reasons this reflection can occur. 8.3.1. You are not correctly logging into the server When your chat script completes, pppd starts on your PC. However, if you have not completed the log in process to the server (including sending any command required to start PPP on the server), PPP will not start. So, the lcp packets are reflected and you receive this error. You need to carefully check and correct (if necessary) your chat script (see above). 8.3.2. You are not starting PPP on the server Some PPP servers require you to enter a command and/or a RETURN after completing the log in process before the remote end starts ppp. Check your chat script (see above). If you log in manually and find you need to send a RETURN after this to start PPP, simply add a blank expect/send pair to the end of your chat script (an empty send string actually sends a RETURN). 8.3.3. The remote PPP process is slow to start This one is a bit tricksy! By default, your Linux pppd is compiled to send a maximum of 10 lcp configuration requests. If the server is a bit slow to start up, all 10 such requests can be sent before the remote PPP is ready to receive them. On your machine, pppd sees all 10 requests reflected back (with the 8th bit stripped) and exits. There are two ways round this:- Add "lcp-max-configure 30" to your ppp options. This increases the maximum number of lcp configure packets pppd sends before giving up. For really slow server, you may need even more than this. Alternatively, you can get a bit tricksy in return. You may have noticed that when you logged in by hand to the PPP server and PPP started there, the first character of the ppp garbage that appears was always the tilde character (˜). Using this knowledge we can add a new expect/send pair to the end of the chat script which expects a tilde and sends nothing. This would look like:- Note: as the tilde character has a special meaning in the shell, it must be escaped (and hence the leading backslash).
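The chat-script fragment that should follow "This would look like:-" is missing from this copy. Based on the surrounding description — expect an escaped tilde, send nothing — the appended expect/send pair is presumably something like the following, added to the end of the existing script (shown as a reconstruction, not the author's original text):

... '\~' ''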
http://tldp.docs.sk/howto/linux-ppp/x419.html
2019-03-19T00:22:24
CC-MAIN-2019-13
1552912201812.2
[]
tldp.docs.sk
Creating a Package Note This topic is intended to be a reference for the process of creating a package. For a focused walkthrough example, refer to the Create and Publish a Package Quickstart. Additionally, this topic applies to all project types other than .NET Core projects using Visual Studio 2017 and NuGet 4.0+. In these cases, NuGet uses information from a .csproj file directly. These details are explained in Create .NET Standard Packages with Visual Studio 2017 and NuGet pack and restore as MSBuild targets.. Note. editing a file from another project. You can also have NuGet create a template manifest for your by using the following command: nuget spec <package_name> metadata. Solution-level packages (NuGet 2.x only) root. -. For the <version> value: -,> project.json: Indicate the package type within a packOptions.packageTypeproperty json: { // ... "packOptions": { "packageType": "DotnetCliTool" } }>: Note If you include an empty <files> node in the .nuspec file, NuGet will> <!-- ... --> <. Package Explorer directory,: - Handling dependencies - Supporting multiple target frameworks - Transformations of source and configuration files - Localization - Pre-release versions Finally, there are additional package types to be aware of:
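As an illustration (not taken from the original article), here is a minimal .nuspec sketch; the identifier, version, author and file paths are placeholders:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.MyLibrary</id>
    <version>1.0.0</version>
    <authors>Your Name</authors>
    <description>A short description of the package contents.</description>
  </metadata>
  <files>
    <!-- src is relative to the .nuspec file; target is the folder inside the package -->
    <file src="bin\Release\MyLibrary.dll" target="lib\net45" />
  </files>
</package>

Running nuget pack MyLibrary.nuspec against such a file then produces MyLibrary.1.0.0.nupkg in the current directory.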
https://docs.microsoft.com/en-us/nuget/create-packages/creating-a-package
2017-01-16T12:47:37
CC-MAIN-2017-04
1484560279176.20
[array(['media/create_01-showreadme.png', 'The display of a readme file for a NuGet package upon installation'], dtype=object) ]
docs.microsoft.com
package org.modeshape.web.jcr.spi;

import java.util.Set;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;

/**
 * Interface for any class that provides access to one or more local JCR repositories.
 * <p>
 * Repository providers must provide a public, no-argument constructor and be thread-safe.
 * </p>
 */
public interface RepositoryProvider {

    /**
     * Returns an active session for the given workspace name in the named repository.
     * <p>
     * JCR implementations that do not support multiple repositories on the same server can ignore the repositoryName parameter.
     * </p>
     *
     * @param request the servlet request; may not be null or unauthenticated
     * @param repositoryName the name of the repository in which the session is created
     * @param workspaceName the name of the workspace to which the session should be connected
     * @return an active session with the given workspace in the named repository
     * @throws RepositoryException if any other error occurs
     */
    public Session getSession( HttpServletRequest request,
                               String repositoryName,
                               String workspaceName ) throws RepositoryException;

    /**
     * Returns the available repository names
     * <p>
     * JCR implementations that do not support multiple repositories on the same server should provide a singleton set containing
     * some default repository name.
     * </p>
     *
     * @return the available repository names; may not be null or empty
     */
    Set<String> getJcrRepositoryNames();

    /**
     * Signals the repository provider that it should initialize itself based on the provided {@link ServletContext servlet
     * context} and begin accepting connections.
     *
     * @param context the servlet context for the REST servlet
     */
    void startup( ServletContext context );

    /**
     * Signals the repository provider that it should complete any pending transactions, shutdown, and release any external
     * resource held.
     */
    void shutdown();

}
http://docs.jboss.org/modeshape/1.2.0.Final/xref/org/modeshape/web/jcr/spi/RepositoryProvider.html
2017-01-16T14:12:57
CC-MAIN-2017-04
1484560279176.20
[]
docs.jboss.org
Automate User Provisioning and Deprovisioning to SaaS Applications with Azure Active Directory What is Automated User Provisioning for SaaS Apps? Azure Active Directory (Azure AD) allows you to automate the creation, maintenance, and removal of user identities in cloud (SaaS) applications such as Dropbox, Salesforce, ServiceNow, and more. Below are some examples of what this feature allows you to do: - Automatically create new accounts in the right SaaS apps for new people when they join your team. - Automatically deactivate accounts from SaaS apps when people inevitably leave the team. - Ensure that the identities in your SaaS apps are kept up to date based on changes in the directory. - Provision non-user objects, such as groups, to SaaS apps that support them. Automated user provisioning also includes the following functionality: - The ability to match existing identities between Azure AD and SaaS apps. - Customization options to help Azure AD fit the current configurations of the SaaS apps that your organization is currently using. - Optional email alerts for provisioning errors. - Reporting and activity logs to help with monitoring and troubleshooting. Why Use Automated Provisioning? Some common motivations for using this feature include: - To avoid the costs, inefficiencies, and human error associated with manual provisioning processes. - To secure your organization by instantly removing users' identities from key SaaS apps when they leave the organization. - To easily import a bulk number of users into a particular SaaS application. - To enjoy the convenience of having your provisioning solution run off of the same app access policies that you defined for Azure AD Single Sign-On. Frequently Asked Questions How frequently does Azure AD write directory changes to the SaaS app? Azure AD checks for changes every five to ten minutes. If the SaaS app is returning several errors (such as in the case of invalid admin credentials), then Azure AD will gradually slow its frequency to up to once per day until the errors are fixed. How long will it take to provision my users? Incremental changes happen nearly instantly but if you are trying to provision most of your directory, then it depends on the number of users and groups that you have. Small directories take only a few minutes, medium-sized directories may take several minutes, and very large directories may take several hours. How can I track the progress of the current provisioning job? You can review the Account Provisioning Report under the Reports section of your directory. Another option is to visit the Dashboard tab for the SaaS application that you are provisioning to, and look under the "Integration Status" section near the bottom of the page. How will I know if users fail to get provisioned properly? At the end of the provisioning configuration wizard there is an option to subscribe to email notifications for provisioning failures. You can also check the Provisioning Errors Report to see which users failed to be provisioned and why. Can Azure AD write changes from the SaaS app back to the directory? For most SaaS apps, provisioning is outbound-only, which means that users are written from the directory to the application, and changes from the application cannot be written back to the directory. For Workday, however, provisioning is inbound-only, which means that that users are imported into the directory from Workday, and likewise, changes in the directory do not get written back into Workday. 
How can I submit feedback to the engineering team? Please contact us through the Azure Active Directory feedback forum. How Does Automated Provisioning Work? Azure AD provisions users to SaaS apps by connecting to provisioning endpoints provided by each application vendor. These endpoints allow Azure AD to programmatically create, update, and remove users. Below is a brief overview of the different steps that Azure AD takes to automate provisioning. - When you enable provisioning for an application for the first time, the following actions are performed: - Azure AD will attempt to match any existing users in the SaaS app with their corresponding identities in the directory. When a user is matched, they are not automatically enabled for single sign-on. In order for a user to have access to the application, they must be explicitly assigned to the app in Azure AD, either directly or via group membership. - If you have already specified which users should be assigned to the application, and if Azure AD fails to find existing accounts for those users, Azure AD will provision new accounts for them in the application. - Once the initial synchronization has been completed as described above, Azure AD will check every 10 minutes for the following changes: - If new users have been assigned to the application (either directly or through group membership), then they will be provisioned a new account in the SaaS app. - If a user's access has been removed, then their account in the SaaS app will be marked as disabled (users are never fully deleted, which protects you from data loss in the event of a misconfiguration). - If a user was recently assigned to the application and they already had an account in the SaaS app, that account will be marked as enabled, and certain user properties may be updated if they are out-of-date compared to the directory. - If a user's information (such as phone number, office location, etc) has been changed in the directory, then that information will also be updated in the SaaS application. For more information on how attributes are mapped between Azure AD and your SaaS app, see the article on Customizing Attribute Mappings. List of Apps that Support Automated User Provisioning Click on an app to see a tutorial on how to configure automated provisioning for it: - Box - Citrix GoToMeeting - Concur - Docusign - Dropbox for Business - Google Apps - Jive - Salesforce - Salesforce Sandbox - ServiceNow - Workday (inbound provisioning) In order for an application to support automated user provisioning, it must first provide the necessary endpoints that allow for external programs to automate the creation, maintenance, and removal of users. Therefore, not all SaaS apps are compatible with this feature. For apps that do support this, the Azure AD engineering team will then be able to build a provisioning connector to those apps, and this work is prioritized by the needs of current and prospective customers. To contact the Azure AD engineering team to request provisioning support for additional applications, please submit a message through the Azure Active Directory feedback forum. 
Related Articles - Article Index for Application Management in Azure Active Directory - Customizing Attribute Mappings for User Provisioning - Writing Expressions for Attribute Mappings - Scoping Filters for User Provisioning - Using SCIM to enable automatic provisioning of users and groups from Azure Active Directory to applications - Account Provisioning Notifications - List of Tutorials on How to Integrate SaaS Apps
https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-app-provisioning
2017-01-16T12:56:15
CC-MAIN-2017-04
1484560279176.20
[]
docs.microsoft.com
void stop()
Stop this component. Note that in some shutdown scenarios, only destroy methods will be called. Should not throw an exception if the component isn't started yet. In the case of a container, this will propagate the stop signal to all components that apply.
See Also: SmartLifecycle.stop(Runnable), DisposableBean.destroy()

boolean isRunning()
Check whether this component is currently running. In the case of a container, this will return true only if all components that apply are currently running.
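To make the contract concrete, here is a minimal implementation sketch of the Lifecycle interface. The class name and its internal flag are illustrative assumptions, not part of the Spring documentation above.

import org.springframework.context.Lifecycle;

// A minimal Lifecycle implementation: a hypothetical connection manager.
// Spring calls start()/stop() around context refresh and shutdown when the
// bean is registered in the application context.
public class ConnectionManager implements Lifecycle {

    private volatile boolean running = false;

    @Override
    public void start() {
        // acquire resources here (open connections, start threads, ...)
        running = true;
    }

    @Override
    public void stop() {
        // release resources; must not fail if start() was never called
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }
}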
http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/Lifecycle.html
2017-01-16T12:52:24
CC-MAIN-2017-04
1484560279176.20
[]
docs.spring.io
JDatabase::getLog

This namespace has been archived - please do not edit or create pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved, and its content may be incomplete.
https://docs.joomla.org/API17:JDatabase::getLog
2017-01-16T12:57:55
CC-MAIN-2017-04
1484560279176.20
[]
docs.joomla.org
Creating a grid layout

You can position fields in rows and columns on a screen to create a grid by using the GridFieldManager class. When you create a grid, you can specify the number of rows and columns. After you create a grid, you cannot change the number of rows and columns that it contains.

Grids are zero-indexed, so the first cell is located at row 0, column 0. In a locale with a left-to-right text direction, the first cell is in the upper-left corner of the grid. In a locale with a right-to-left text direction, the first cell is in the upper-right corner of the grid.

You can add fields to a grid sequentially (left-to-right, top-to-bottom in locales with a left-to-right text direction; right-to-left, top-to-bottom in locales with a right-to-left text direction) or by specifying a row and column in the grid. You can delete fields, insert fields, specify the spacing between columns and rows, and retrieve a grid's properties.

Grids do not have defined heading rows or heading columns. You can emulate the appearance of headings by changing the appearance of the fields in the grid's first row or first column.

Grids can scroll horizontally or vertically if the grid's width or height exceeds the screen's visible area. You can specify column width by invoking GridFieldManager.setColumnProperty(), and you can specify row height by invoking GridFieldManager.setRowProperty(). When you invoke these methods, you must specify a GridFieldManager property.
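A rough sketch of how such a grid might be built follows. The 2x2 dimensions, the style argument, and the FIXED_SIZE property constant are illustrative assumptions rather than values taken from the text above.

import net.rim.device.api.ui.component.LabelField;
import net.rim.device.api.ui.container.GridFieldManager;
import net.rim.device.api.ui.container.MainScreen;

// Hypothetical screen that lays out four labels in a 2x2 grid.
public class GridScreen extends MainScreen {
    public GridScreen() {
        // 2 rows, 2 columns; the style argument (0) is an assumption.
        GridFieldManager grid = new GridFieldManager(2, 2, 0);

        // Fields added sequentially fill row 0 first, then row 1
        // (in a left-to-right locale).
        grid.add(new LabelField("Name"));
        grid.add(new LabelField("Alice"));
        grid.add(new LabelField("Role"));
        grid.add(new LabelField("Developer"));

        // Give the first column a fixed width of 120 pixels;
        // FIXED_SIZE is assumed to be the relevant property constant.
        grid.setColumnProperty(0, GridFieldManager.FIXED_SIZE, 120);

        add(grid);
    }
}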
http://docs.blackberry.com/en/developers/deliverables/17971/Creating_grid_layout_877556_11.jsp
2014-03-07T09:36:01
CC-MAIN-2014-10
1393999640501
[array(['GFM-left-to-right_897231_11.jpg', 'This screen shot shows a grid in a left-to-right locale.'], dtype=object) array(['GFM-right-to-left_897235_11.jpg', 'This screen shot shows a grid in a right-to-left locale.'], dtype=object) ]
docs.blackberry.com
Note: If you are not familiar with the generator concept, this Generator Breakdown page should be read before continuing.

Generator expressions are defined through the pattern:

<expression> for <declarations> in <iterator> [if|unless <condition>]

Generator expressions can be used as return values:

def GetCompletedTasks():
    return t for t in _tasks if t.IsCompleted

Generator expressions can be stored in variables:

oddNumbers = i for i in range(10) if i % 2

Generator expressions can be used as arguments to functions:

print(join(i*2 for i in range(10) if 0 == i % 2))

In all cases the evaluation of each inner expression happens only on demand, as the generator is iterated.

Generator expressions capture their enclosing environment (like closures do) and thus are able to affect it by updating variables:

i = 0
a = (++i)*j for j in range(3)
print(join(a)) # prints "0 2 6"
print(i) # prints 3

As well as being affected by changes to captured variables:

i = 1
a = range(3)
generator = i*j for j in a
print(join(generator)) # prints "0 1 2"
i = 2
a = range(5)
print(join(generator)) # prints "0 2 4 6 8"

boo's variable capturing behavior differs from python's in a subtle but, I think, good way:

import System

functions = Math.Sin, Math.Cos
a = []
for f in functions:
    a.Add(f(value) for value in range(3))
for iterator in a:
    print(join(iterator))

This program properly prints the sines followed by the cosines of 0, 1, 2 because for-controlled variable references (such as f) in boo generator expressions, as well as in closures, are bound early.

If you don't know what the python behavior would be, check this document.
http://docs.codehaus.org/exportword?pageId=5935
2014-03-07T09:35:26
CC-MAIN-2014-10
1393999640501
[]
docs.codehaus.org
Set the level of detail that the log console displays

- In the BlackBerry® Infrastructure and Smartphone Simulator, click Options.
- In the GUI Console Log Level drop-down list, click the setting that you want the BlackBerry Infrastructure and Smartphone Simulator to display in the log console. The descriptions of each level are provided in the drop-down list.
http://docs.blackberry.com/en/admin/deliverables/16558/Change_logging_information_level_613663_11.jsp
2014-03-07T09:39:15
CC-MAIN-2014-10
1393999640501
[]
docs.blackberry.com
The Groovy Eclipse Plugin allows you to edit, compile and run groovy scripts and classes. Note that the plugin is work in progress. You can check the current status of the plugin here: issues and bugs

Eclipse version compatibility

Eclipse 3.0 : not working; there are dependencies on the 3.1 API
Eclipse 3.1 : not working with version 1.0.1 of the plugin. Works with version 1.0.0.20070118
Eclipse 3.2 : working
Eclipse 3.3 : working
Eclipse 3.4 : working (initial simple testing seems to work, more thorough testing still required)

Quick Start

For the installation of the Groovy Eclipse Plugin follow this PDF presentation. The "Hello World" screen cast shows you how to write a Groovy "Hello World" application using Eclipse.

Installation

There are two ways to install the plugin: the preferred way is through the Update Site, and the second is manual installation from a zip archive.

Update Site

The update site containing the most recent release is here:
- Go to: Help -> Software Updates -> Find and Install -> Search For New Features
- Click on New Remote Site
- Enter a name (eg: Groovy) in the Name field
- Copy the URL from above into the URL field and press OK
- Check the new Groovy repository and press Finish
- Under Select the Features to Install check the Groovy check box (be sure to get the latest version) and press Next
- Accept the agreement and press Next
- If the default location looks okay press Finish (this will download the plugin)
- If you get a warning that the plugin is unsigned click Install or Install All

This should download and install the Groovy plugin. It may require a restart of Eclipse to make sure it is loaded okay. If you're interested in trying the latest development version, the update site is:

The head version of groovy-eclipse is built on a CI server at the HSR (University of Applied Sciences Rapperswil). The update site is:

From Zip Archive

It is possible to access a zip archive that can be expanded into your eclipse installation from:

Create a Groovy Project

To create a basic Groovy project in Eclipse perform the following steps:
- Go to: File -> New -> Project
- Select Java Project and press Next
- In the Project Name field enter the name of your project (GroovyJava for this example)
- Under Project Layout select Create separate source and output folders and press Finish
- In the Package Explorer find the newly created project, right click, and select Groovy -> Add Groovy Nature.

Download and build from Subversion

This section is for those who want to do development work on the Eclipse plugin. More specific information regarding the wish-list and standards can be found at Eclipse Plugin Development. You may also want to join the groovy-eclipse-dev mailing list by subscribing here. Issues and bugs are tracked under the "GRECLIPSE" category.
http://docs.codehaus.org/pages/viewpage.action?pageId=101646548
2014-03-07T09:38:53
CC-MAIN-2014-10
1393999640501
[]
docs.codehaus.org
When you work with Spot Instances, there are two kinds of limits to consider: bid price limits and request limits. A Spot Bid Price limit is the maximum price you can bid when you make a request for Spot Instances. A Spot Request limit is the number of instances per region that you can request. Starting on December 20, 2013, Amazon EC2 introduced a default limit on the maximum amount that you can bid for a Spot Instance. The default bid price limit is designed to protect Spot Instance customers from incurring unexpected charges due to high bid prices. The limit also aims to reduce the likelihood that Spot Prices rise to excessively high levels. The default bid limit will be set to four times the On-Demand price. For example, if you submit a Spot Instance request for a Linux m3.2xlarge instance in us-east-1, for which the On-Demand price is $0.900 per hour, you may bid up to $3.600 per hour. Note If you currently have a bid above the new default limit, we recommend that you cancel your existing bid, and start bidding within the limits before December 20, 2013. After that date, Spot Instance requests that exceed your account's bid limit may not be processed. If your application requires you to bid above the default bid price limit, please submit a limit increase request to [email protected]. Your limit increase request must include the following information: Your AWS account number. How you use Spot Instances and manage Spot interruptions. The bid price limit you are requesting (specifying the instance type, region, and product platform). For information about On-Demand instance pricing, see Amazon EC2 Pricing. For many instance types, you are limited to a total of 100 Spot requests per region. Some instance types are not available in all regions, and certain instance types have lower regional limits, as shown in the following table. Keep in mind that even if you are within your Spot request limits, your Spot Instances can be terminated if your bid price no longer exceeds the Spot price, or if there is a Spot capacity shortage. Note New AWS accounts may start with limits that are lower than the limits described here. In addition, if your account has a custom regional Spot request limit, the custom limit overrides all limits described here. The following table lists instance types and their Spot request limits. The default regional Spot request limit is 100. If you need more Spot Instances, complete the Amazon EC2 instance request form with your use case, specify in the Use Case Description that you are requesting an increase to your account's Spot Instance limits, and your instance increase will be considered. Limit increases are tied to the region specified in the Spot Instance request. For information about instance types, see Instance Types.
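The four-times-On-Demand rule above is simple arithmetic; as a quick illustration (not part of the AWS documentation), the following snippet reproduces the worked example for the Linux m3.2xlarge instance.

// Illustrative only: computes the default Spot bid price limit,
// which AWS describes as four times the On-Demand price.
public final class SpotBidLimit {

    private static final double DEFAULT_LIMIT_MULTIPLIER = 4.0;

    public static double defaultBidLimit(double onDemandPricePerHour) {
        return onDemandPricePerHour * DEFAULT_LIMIT_MULTIPLIER;
    }

    public static void main(String[] args) {
        // Example from the text: Linux m3.2xlarge in us-east-1 at $0.900/hour.
        double onDemand = 0.900;
        System.out.printf("Maximum default bid: $%.3f per hour%n",
                defaultBidLimit(onDemand)); // prints $3.600
    }
}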
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-limits.html
2014-03-07T09:33:43
CC-MAIN-2014-10
1393999640501
[]
docs.aws.amazon.com
- PriceClass
  For information about how price classes map to CloudFront regions, see Amazon CloudFront Pricing.
  Type: String
  Valid Values: PriceClass_100 | PriceClass_200 | PriceClass_All
  Required: No
- Restrictions
  A complex type that identifies ways in which you want to restrict distribution of your content.
  Type: Restrictions object
  Required: No
http://docs.aws.amazon.com/AmazonCloudFront/latest/APIReference/API_DistributionConfig.html
2017-01-16T17:16:30
CC-MAIN-2017-04
1484560279224.13
[]
docs.aws.amazon.com
Typed Memoryviews¶ Typed memoryviews allow efficient access to memory buffers, such as those underlying NumPy arrays, without incurring any Python overhead. Memoryviews are similar to the current NumPy array buffer support ( np.ndarray[np.float64_t, ndim=2]), but they have more features and cleaner syntax. Memoryviews are more general than the old NumPy array buffer support, because they can handle a wider variety of sources of array data. For example, they can handle C arrays and the Cython array type (Cython arrays). A memoryview can be used in any context (function parameters, module-level, cdef class attribute, etc) and can be obtained from nearly any object that exposes writable buffer through the PEP 3118 buffer interface. Quickstart¶ If you are used to working with NumPy, the following examples should get you started with Cython memory views. from cython.view cimport array as cvarray import numpy as np # Memoryview on a NumPy array narr = np.arange(27, dtype=np.dtype("i")).reshape((3, 3, 3)) cdef int [:, :, :] narr_view = narr # Memoryview on a C array cdef int carr[3][3][3] cdef int [:, :, :] carr_view = carr # Memoryview on a Cython array cyarr = cvarray(shape=(3, 3, 3), itemsize=sizeof(int), format="i") cdef int [:, :, :] cyarr_view = cyarr # Show the sum of all the arrays before altering it print("NumPy sum of the NumPy array before assignments: %s" % narr.sum()) # We can copy the values from one memoryview into another using a single # statement, by either indexing with ... or (NumPy-style) with a colon. carr_view[...] = narr_view cyarr_view[:] = narr_view # NumPy-style syntax for assigning a single value to all elements. narr_view[:, :, :] = 3 # Just to distinguish the arrays carr_view[0, 0, 0] = 100 cyarr_view[0, 0, 0] = 1000 # Assigning into the memoryview on the NumPy array alters the latter print("NumPy sum of NumPy array after assignments: %s" % narr.sum()) # A function using a memoryview does not usually need the GIL cpdef int sum3d(int[:, :, :] arr) nogil: cdef size_t i, j, k cdef int total = 0 I = arr.shape[0] J = arr.shape[1] K = arr.shape[2] for i in range(I): for j in range(J): for k in range(K): total += arr[i, j, k] return total # A function accepting a memoryview knows how to use a NumPy array, # a C array, a Cython array... print("Memoryview sum of NumPy array is %s" % sum3d(narr)) print("Memoryview sum of C array is %s" % sum3d(carr)) print("Memoryview sum of Cython array is %s" % sum3d(cyarr)) # ... and of course, a memoryview. print("Memoryview sum of C memoryview is %s" % sum3d(carr_view)) This code should give the following output: NumPy sum of the NumPy array before assignments: 351 NumPy sum of NumPy array after assignments: 81 Memoryview sum of NumPy array is 81 Memoryview sum of C array is 451 Memoryview sum of Cython array is 1351 Memoryview sum of C memoryview is 451 Using memoryviews¶ Syntax¶ Memory views use Python slicing syntax in a similar way as NumPy. To create a complete view on a one-dimensional int buffer: cdef int[:] view1D = exporting_object A complete 3D view: cdef int[:,:,:] view3D = exporting_object A 2D view that restricts the first dimension of a buffer to 100 rows starting at the second (index 1) and then skips every second (odd) row: cdef int[1:102:2,:] partial_view = exporting_object This also works conveniently as function arguments: def process_3d_buffer(int[1:102:2,:] view not None): ... The not None declaration for the argument automatically rejects None values as input, which would otherwise be allowed. 
The reason why None is allowed by default is that it is conveniently used for return arguments: def process_buffer(int[:,:] input not None, int[:,:] output = None): if output is None: output = ... # e.g. numpy.empty_like(input) # process 'input' into 'output' return output Cython will reject incompatible buffers automatically, e.g. passing a three dimensional buffer into a function that requires a two dimensional buffer will raise a ValueError. Indexing¶ In Cython, index access on memory views is automatically translated into memory addresses. The following code requests a two-dimensional memory view of C int typed items and indexes into it: cdef int[:,:] buf = exporting_object print(buf[1,2]) Negative indices work as well, counting from the end of the respective dimension: print(buf[-1,-2]) The following function loops over each dimension of a 2D array and adds 1 to each item: def add_one(int[:,:] buf): for x in xrange(buf.shape[0]): for y in xrange(buf.shape[1]): buf[x,y] += 1 Indexing and slicing can be done with or without the GIL. It basically works like NumPy. If indices are specified for every dimension you will get an element of the base type (e.g. int). Otherwise, you will get a new view. An Ellipsis means you get consecutive slices for every unspecified dimension: cdef int[:, :, :] my_view = exporting_object # These are all equivalent my_view[10] my_view[10, :, :] my_view[10, ...] Copying¶ Memory views can be copied in place: cdef int[:, :, :] to_view, from_view ... # copy the elements in from_view to to_view to_view[...] = from_view # or to_view[:] = from_view # or to_view[:, :, :] = from_view They can also be copied with the copy() and copy_fortran() methods; see C and Fortran contiguous copies. Transposing¶ In most cases (see below), the memoryview can be transposed in the same way that NumPy slices can be transposed: cdef int[:, ::1] c_contig = ... cdef int[::1, :] f_contig = c_contig.T This gives a new, transposed, view on the data. Transposing requires that all dimensions of the memoryview have a direct access memory layout (i.e., there are no indirections through pointers). See Specifying more general memory layouts for details. Newaxis¶ As for NumPy, new axes can be introduced by indexing an array with None cdef double[:] myslice = np.linspace(0, 10, num=50) # 2D array with shape (1, 50) myslice[None] # or myslice[None, :] # 2D array with shape (50, 1) myslice[:, None] One may mix new axis indexing with all other forms of indexing and slicing. See also an example. Comparison to the old buffer support¶ You will probably prefer memoryviews to the older syntax because: - The syntax is cleaner - Memoryviews do not usually need the GIL (see Memoryviews and the GIL) - Memoryviews are considerably faster For example, this is the old syntax equivalent of the sum3d function above: cpdef int old_sum3d(object[int, ndim=3, mode='strided'] arr): cdef int I, J, K, total = 0 I = arr.shape[0] J = arr.shape[1] K = arr.shape[2] for i in range(I): for j in range(J): for k in range(K): total += arr[i, j, k] return total Note that we can’t use nogil for the buffer version of the function as we could for the memoryview version of sum3d above, because buffer objects are Python objects. However, even if we don’t use nogil with the memoryview, it is significantly faster. 
This is output from an IPython session after importing both versions:

In [2]: import numpy as np

In [3]: arr = np.zeros((40, 40, 40), dtype=int)

In [4]: timeit -r15 old_sum3d(arr)
1000 loops, best of 15: 298 us per loop

In [5]: timeit -r15 sum3d(arr)
1000 loops, best of 15: 219 us per loop

Python buffer support¶

Cython memoryviews support nearly all objects exporting the interface of Python new style buffers. This is the buffer interface described in PEP 3118. NumPy arrays support this interface, as do Cython arrays. The "nearly all" is because the Python buffer interface allows the elements in the data array to themselves be pointers; Cython memoryviews do not yet support this.

Memory layout¶

The buffer interface allows objects to identify the underlying memory in a variety of ways. With the exception of pointers for data elements, Cython memoryviews support all Python new-type buffer layouts. It can be useful to know or specify memory layout if the memory has to be in a particular format for an external routine, or for code optimization.

Background¶

The concepts are as follows: there is data access and data packing. Data access means either direct (no pointer) or indirect (pointer). Data packing means your data may be contiguous or not contiguous in memory, and may use strides to identify the jumps in memory consecutive indices need to take for each dimension.

NumPy arrays provide a good model of strided direct data access, so we'll use them for a refresher on the concepts of C and Fortran contiguous arrays, and data strides.

Brief recap on C, Fortran and strided memory layouts¶

The simplest data layout might be a C contiguous array. This is the default layout in NumPy and Cython arrays. C contiguous means that the array data is continuous in memory (see below) and that neighboring elements in the first dimension of the array are furthest apart in memory, whereas neighboring elements in the last dimension are closest together. For example, in NumPy:

In [2]: arr = np.array([['0', '1', '2'], ['3', '4', '5']], dtype='S1')

Here, arr[0, 0] and arr[0, 1] are one byte apart in memory, whereas arr[0, 0] and arr[1, 0] are 3 bytes apart. This leads us to the idea of strides. Each axis of the array has a stride length, which is the number of bytes needed to go from one element on this axis to the next element. In the case above, the strides for axes 0 and 1 will obviously be:

In [3]: arr.strides
Out[3]: (3, 1)

For a 3D C contiguous array:

In [5]: c_contig = np.arange(24, dtype=np.int8).reshape((2,3,4))

In [6]: c_contig.strides
Out[6]: (12, 4, 1)

A Fortran contiguous array has the opposite memory ordering, with the elements on the first axis closest together in memory:

In [7]: f_contig = np.array(c_contig, order='F')

In [8]: np.all(f_contig == c_contig)
Out[8]: True

In [9]: f_contig.strides
Out[9]: (1, 2, 6)

A contiguous array is one for which a single continuous block of memory contains all the data for the elements of the array, and therefore the memory block length is the product of number of elements in the array and the size of the elements in bytes. In the example above, the memory block is 2 * 3 * 4 * 1 bytes long, where 1 is the length of an int8.
An array can be contiguous without being C or Fortran order:

In [10]: c_contig.transpose((1, 0, 2)).strides
Out[10]: (4, 12, 1)

Slicing a NumPy array can easily make it not contiguous:

In [11]: sliced = c_contig[:,1,:]

In [12]: sliced.strides
Out[12]: (12, 1)

In [13]: sliced.flags
Out[13]:
C_CONTIGUOUS : False
F_CONTIGUOUS : False
OWNDATA : False
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False

Default behavior for memoryview layouts¶

As you'll see in Specifying more general memory layouts, you can specify memory layout for any dimension of a memoryview. For any dimension for which you don't specify a layout, the data access is assumed to be direct, and the data packing assumed to be strided. For example, that will be the assumption for memoryviews like:

int [:, :, :] my_memoryview = obj

C and Fortran contiguous memoryviews¶

You can specify C and Fortran contiguous layouts for the memoryview by using the ::1 step syntax at definition. For example, if you know for sure your memoryview will be on top of a 3D C contiguous layout, you could write:

cdef int[:, :, ::1] c_contiguous = c_contig

where c_contig could be a C contiguous NumPy array. The ::1 at the 3rd position means that the elements in this 3rd dimension will be one element apart in memory. If you know you will have a 3D Fortran contiguous array:

cdef int[::1, :, :] f_contiguous = f_contig

If you pass a non-contiguous buffer, for example

# This array is C contiguous
c_contig = np.arange(24).reshape((2,3,4))
cdef int[:, :, ::1] c_contiguous = c_contig

# But this isn't
c_contiguous = np.array(c_contig, order='F')

you will get a ValueError at runtime:

/Users/mb312/dev_trees/minimal-cython/mincy.pyx in init mincy (mincy.c:17267)()
     69
     70 # But this isn't
---> 71 c_contiguous = np.array(c_contig, order='F')
     72
     73 # Show the sum of all the arrays before altering it

/Users/mb312/dev_trees/minimal-cython/stringsource in View.MemoryView.memoryview_cwrapper (mincy.c:9995)()

/Users/mb312/dev_trees/minimal-cython/stringsource in View.MemoryView.memoryview.__cinit__ (mincy.c:6799)()

ValueError: ndarray is not C-contiguous

Thus the ::1 in the slice type specification indicates in which dimension the data is contiguous. It can only be used to specify full C or Fortran contiguity.

C and Fortran contiguous copies¶

Copies can be made C or Fortran contiguous using the .copy() and .copy_fortran() methods:

# This view is C contiguous
cdef int[:, :, ::1] c_contiguous = myview.copy()

# This view is Fortran contiguous
cdef int[::1, :] f_contiguous_slice = myview.copy_fortran()

Specifying more general memory layouts¶

Data layout can be specified using the previously seen ::1 slice syntax, or by using any of the constants in cython.view. If no specifier is given in any dimension, then the data access is assumed to be direct, and the data packing assumed to be strided. If you don't know whether a dimension will be direct or indirect (because you're getting an object with a buffer interface from some library perhaps), then you can specify the generic flag, in which case it will be determined at runtime.
The flags are as follows: - generic - strided and direct or indirect - strided - strided and direct (this is the default) - indirect - strided and indirect - contiguous - contiguous and direct - indirect_contiguous - the list of pointers is contiguous and they can be used like this: from cython cimport view # direct access in both dimensions, strided in the first dimension, contiguous in the last cdef int[:, ::view.contiguous] a # contiguous list of pointers to contiguous lists of ints cdef int[::view.indirect_contiguous, ::1] b # direct or indirect in the first dimension, direct in the second dimension # strided in both dimensions cdef int[::view.generic, :] c Only the first, last or the dimension following an indirect dimension may be specified contiguous: # INVALID cdef int[::view.contiguous, ::view.indirect, :] a cdef int[::1, ::view.indirect, :] b # VALID cdef int[::view.indirect, ::1, :] a cdef int[::view.indirect, :, ::1] b cdef int[::view.indirect_contiguous, ::1, :] The difference between the contiguous flag and the ::1 specifier is that the former specifies contiguity for only one dimension, whereas the latter specifies contiguity for all following (Fortran) or preceding (C) dimensions: cdef int[:, ::1] c_contig = ... # VALID cdef int[:, ::view.contiguous] myslice = c_contig[::2] # INVALID cdef int[:, ::1] myslice = c_contig[::2] The former case is valid because the last dimension remains contiguous, but the first dimension does not “follow” the last one anymore (meaning, it was strided already, but it is not C or Fortran contiguous any longer), since it was sliced. Memoryviews and the GIL¶ As you will see from the Quickstart section, memoryviews often do not need the GIL: cpdef int sum3d(int[:, :, :] arr) nogil: ... In particular, you do not need the GIL for memoryview indexing, slicing or transposing. Memoryviews require the GIL for the copy methods (C and Fortran contiguous copies), or when the dtype is object and an object element is read or written. Memoryview Objects and Cython Arrays¶ These typed memoryviews can be converted to Python memoryview objects (cython.view.memoryview). These Python objects are indexable, slicable and transposable in the same way that the original memoryviews are. They can also be converted back to Cython-space memoryviews at any time. They have the following attributes: - shape: size in each dimension, as a tuple. - strides: stride along each dimension, in bytes. - suboffsets - ndim: number of dimensions. - size: total number of items in the view (product of the shape). - itemsize: size, in bytes, of the items in the view. - nbytes: equal to sizetimes itemsize. - base And of course the aforementioned T attribute (Transposing). These attributes have the same semantics as in NumPy. For instance, to retrieve the original object: import numpy cimport numpy as cnp cdef cnp.int32_t[:] a = numpy.arange(10, dtype=numpy.int32) a = a[::2] print(a) print(numpy.asarray(a)) print(a.base) # this prints: # <MemoryView of 'ndarray' object> # [0 2 4 6 8] # [0 1 2 3 4 5 6 7 8 9] Note that this example returns the original object from which the view was obtained, and that the view was resliced in the meantime. Cython arrays¶ Whenever a Cython memoryview is copied (using any of the copy or copy_fortran methods), you get a new memoryview slice of a newly created cython.view.array object. This array can also be used manually, and will automatically allocate a block of data. It can later be assigned to a C or Fortran contiguous slice (or a strided slice). 
It can be used like:

from cython cimport view

my_array = view.array(shape=(10, 2), itemsize=sizeof(int), format="i")
cdef int[:, :] my_slice = my_array

It also takes an optional argument mode ('c' or 'fortran') and a boolean allocate_buffer, that indicates whether a buffer should be allocated and freed when it goes out of scope:

cdef view.array my_array = view.array(..., mode="fortran", allocate_buffer=False)
my_array.data = <char *> my_data_pointer

# define a function that can deallocate the data (if needed)
my_array.callback_free_data = free

You can also cast pointers to array, or C arrays to arrays:

cdef view.array my_array = <int[:10, :2]> my_data_pointer
cdef view.array my_array = <int[:, :]> my_c_array

Of course, you can also immediately assign a cython.view.array to a typed memoryview slice. A C array may be assigned directly to a memoryview slice:

cdef int[:, ::1] myslice = my_2d_c_array

The arrays are indexable and slicable from Python space just like memoryview objects, and have the same attributes as memoryview objects.

CPython array module¶

An alternative to cython.view.array is the array module in the Python standard library. In Python 3, the array.array type supports the buffer interface natively, so memoryviews work on top of it without additional setup. Starting with Cython 0.17, however, it is possible to use these arrays as buffer providers also in Python 2. This is done through explicitly cimporting the cpython.array module as follows:

cimport cpython.array

def sum_array(int[:] view):
    """
    >>> from array import array
    >>> sum_array( array('i', [1,2,3]) )
    6
    """
    cdef int total = 0
    for i in range(view.shape[0]):
        total += view[i]
    return total

Note that the cimport also enables the old buffer syntax for the array type. Therefore, the following also works:

from cpython cimport array

def sum_array(array.array[int] arr): # using old buffer syntax
    ...

Coercion to NumPy¶

Memoryview (and array) objects can be coerced to a NumPy ndarray, without having to copy the data. You can e.g. do:

cimport numpy as np
import numpy as np

numpy_array = np.asarray(<np.int32_t[:10, :10]> my_pointer)

Of course, you are not restricted to using NumPy's type (such as np.int32_t here), you can use any usable type.

None Slices¶

Although memoryview slices are not objects they can be set to None and they can be checked for being None as well:

def func(double[:] myarray = None):
    print(myarray is None)

If the function requires real memory views as input, it is therefore best to reject None input straight away in the signature, which is supported in Cython 0.17 and later as follows:

def func(double[:] myarray not None):
    ...

Unlike object attributes of extension classes, memoryview slices are not initialized to None.
http://docs.cython.org/en/latest/src/userguide/memoryviews.html
2017-01-16T17:10:30
CC-MAIN-2017-04
1484560279224.13
[]
docs.cython.org
The printable area is displayed by a dashed border in a layout. The plotter and paper size you select determine the printable area. If your plotter reports an incorrect printable area for your paper size, you can adjust the printable area in the Modify Standard Paper Sizes area under the Modify Standard Paper Sizes (Printable Area) option on the Device and Document Settings tab in the Plotter Configuration Editor.
http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-6007.htm
2017-01-16T17:08:31
CC-MAIN-2017-04
1484560279224.13
[]
docs.autodesk.com
Hit Counters¶

Every time a template is accessed a counter is incremented. To display the number of hits, put the following variable in any template:

{hits}

The hit count for each template can be manually altered in the template's preferences.

Entry "View" Tracking¶

In addition to tracking page hits, EE lets you track the number of views a particular channel entry has received. For more information regarding this feature, please read about the track_views= parameter.
https://docs.expressionengine.com/latest/templates/hit_counter.html
2017-01-16T17:15:27
CC-MAIN-2017-04
1484560279224.13
[]
docs.expressionengine.com
public interface CMPFieldStateFactory

Implementations of this interface are used to create and compare field states for equality.

Object getFieldState(Object fieldValue)
Returns the state for a field value.
Parameters: fieldValue - field's value.

boolean isStateValid(Object state, Object fieldValue)
Checks whether state is equal to the field value's state (possibly, calculated with the getFieldState() method).
Parameters: state - the state to compare with field value's state. fieldValue - field's value, the state of which will be compared with state.
Returns: true if state equals fieldValue's state.
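As an illustration only (not taken from the JBoss documentation), a minimal implementation sketch might treat the field value itself as its state and compare states with equals():

import org.jboss.ejb.plugins.cmp.jdbc.CMPFieldStateFactory;

// Hypothetical example: uses the field value itself as its "state" and
// compares states with equals(). Real implementations might instead keep
// a serialized snapshot or a deep copy of the value.
public class SimpleFieldStateFactory implements CMPFieldStateFactory {

    public Object getFieldState(Object fieldValue) {
        // The state of a null value is simply null.
        return fieldValue;
    }

    public boolean isStateValid(Object state, Object fieldValue) {
        // The field is considered unchanged if its current value still
        // equals the previously recorded state.
        return state == null ? fieldValue == null : state.equals(fieldValue);
    }
}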
http://docs.jboss.org/jbossas/javadoc/4.0.4/server/org/jboss/ejb/plugins/cmp/jdbc/CMPFieldStateFactory.html
2017-01-16T17:25:07
CC-MAIN-2017-04
1484560279224.13
[]
docs.jboss.org
Developers Guide

Table of Contents

Developer Scratch Space

These pages are open to all; please feel free to update as you see fit. You may also add a child page to the above list.

How to Print the Developers Guide

- Please click on this link: Geotools Developers Guide
- Press the print icon at the top corner of the screen
- And print the page from your browser.
http://docs.codehaus.org/pages/viewpage.action?pageId=18921
2014-08-20T12:50:15
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
Supported file extensions: .xhtml, .jspf, .jsp, .erb. Please note you can run sonar analysis for an artefact in only one language.

Duplication is reported if more than a minimum amount of nodes are replicated (in the same file or another file). The default minimum tokens is set to 5. Comments are counted by adding the lines for server side and client side.

Roadmap:
- Run analysis directly from maven (without sonar)
- More support for WCAG, webrichtlijnen
- Enhanced validation of unified expressions (using JSFUnit?)
- Dependency analysis
http://docs.codehaus.org/pages/viewpage.action?pageId=175964163
2014-08-20T12:45:17
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
<versionScheme>
  <groupId>..</groupId>
  <artifactId>..</artifactId>
  <version>..</version>
  <!-- we may need to disallow version ranges here -->
</versionScheme>
http://docs.codehaus.org/display/MAVEN/Versioning?focusedCommentId=109772813
2014-08-20T12:51:57
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
Change the properties of a BlackBerry application project

The properties of a BlackBerry® application project are contained in the BlackBerry_App_Descriptor.xml file.

Note: The BlackBerry_App_Descriptor.xml file is located in the root of the project. A change to the BlackBerry_App_Descriptor.xml file does not trigger a Java® build and does not package a BlackBerry application project.
http://docs.blackberry.com/en/developers/deliverables/23674/Change_settings_BB_app_project_932562_11.jsp
2014-08-20T13:00:45
CC-MAIN-2014-35
1408500808153.1
[]
docs.blackberry.com