content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
No Employee Shifts are exactly what they sound like – shifts that don't have an employee assigned to them. You can use these to share "open shifts" with your employees, keep things organized as you build schedule, or maybe even build a template that represents your ideal shift coverage. This guide will walk you through all of the ways to use No Employee Shifts. We have a tutorial video that covers some of the information in this article. If you like watching instead of reading you should check it out! Not all schedules have No Employee Shifts enabled. As a manager you can disable the No Employee Shifts add-on. You can control which add-ons are enabled for you schedule on the add-ons page. No Employee Shifts on the Schedule If the schedule is sorted by employees you will notice that the top row is labeled No Employee. This is where you can add shifts that are not assigned to a specific employee. All No Employee Shifts have a blue background so you can easily spot them. Filling Shifts with No Employee Shifts When a No Employee Shift is published it will automatically be offered to your employees. This is similar to a shift swap, but the shift is not assigned to an employee yet. Your employees can see No Employee Shifts on their dashboard under the Shifts I Can Pickup section. They can pick up a No Employee Shift the same way they would pick up a shift swap. Multiple No Employee Shifts If you want to create multiple No Employee Shifts that are the same you can do this easily by setting a quantity. When creating or editing a No Employee Shift you will notice a quantity field in the edit pop-up. This field is only available for No Employee Shifts. When a shift has a quantity greater than 1 you will see a red badge on the shift that displays the quantity. As employees pick up this shift, the quantity will go down. Also, if you drag and drop this shift on the schedule it will automatically duplicate and reduce the quantity. Creating Coverage Templates with No Employee Shifts We often get asked how to build templates that represent ideal shift coverage. You can use No Employee Shifts to do this quite easily in ZoomShift. We wrote this help guide to explain how to do this.
http://docs.zoomshift.com/employee-scheduling/no-employee-open-shifts
2018-02-18T01:02:48
CC-MAIN-2018-09
1518891811243.29
[array(['https://images.contentful.com/7m65w4g847me/2iBMifGLjWukwQCSsuuEYe/5fbf791b47a0fbbccc4840a8b82dc38b/no-employee-shifts-1.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/3etEkyRJm88mCUKIiMs2Wk/14cb819515c68cf14e3396c1fd054690/no-employee-shifts-2.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/3xGRENINp6cmuaQYyqseGs/ae03fe0eed55369cf383aa019985e8f6/no-employee-shifts-3.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/4salXT1J9CAsM0Gcwsu0WW/12226af8c7eaf765d499a467dbd6b6f1/no-employee-shifts-4.png', None], dtype=object) ]
docs.zoomshift.com
CR 16-009 Rule Text Marriage and Family Therapy, Counseling, and Social Worker Examining Board (MPSW) Administrative Code Chapters Affected: Ch. MPSW 10 (Revised) Ch. MPSW 11 (Revised) Ch. MPSW 12 (Revised) Ch. MPSW 14 (Revised) Related to: Licensure, education, exam and supervised practice of professional counseling Hearing Date: Monday, February 15, 2016 Comment on related clearinghouse rule
http://docs.legis.wisconsin.gov/code/register/2016/721A3/register/rule_notices/cr_16_009_hearing_information/cr_16_009_rule_text
2017-08-16T21:46:42
CC-MAIN-2017-34
1502886102663.36
[]
docs.legis.wisconsin.gov
Binding RadSlider to ASP DataSource components RadSlider can now be data bound to a data source. When you data bind it, it automatically creates items and populates the Items collection. That is why to be able to data bind RadSlider you should explicitly set ItemType="Item" To control the data binding you should use the new ItemBinding inner property add the data bound items to already declared items by setting AppendDataBoundItems="true". Below is demonstrated how a data source for a RadSlider can be configured. You can also refer to the Databinding online demo, which contains a more detailed example of this feature. <telerik:RadSlider <ItemBinding TextField="Name" ToolTipField="Description" ValueField="ID" /> </telerik:RadSlider> <asp:SqlDataSource</asp:SqlDataSource> Database-friendly Value setting The RadSlider control provides DbValue property for setting a value in a database-friendly way. By default the RadSliderSlider, so that you can avoid problems due to setting incorrect type. The online demo Database-friendly Value setting provides an example of the setup that should be used in such scenario.
http://docs.telerik.com/devtools/aspnet-ajax/controls/slider/data-binding/binding-radslider-to-asp-datasource-components
2017-08-16T21:51:05
CC-MAIN-2017-34
1502886102663.36
[]
docs.telerik.com
The Cluvio free plan lets you use most of the Cluvio features with your business data, keeping all of the security options and without any limits on number of rows in your database. A detailed overview can be found in the Pricing page. The one major limitation that the Free plan has is the number of query executions per month that you can run. Here is an overview of what counts as a query execution: - Running a query in the Report editor - Using "Refresh" on a report (this disregards the result cache and runs the query) - Using "Refresh" on a dashboard (same as refreshing all reports on a dashboard) would add 1 query execution per report - Visiting a dashboard that does not have the results cached would run queries for all reports. Results are cached for 24 hours, so if you keep a dashboard open on a TV on a wall, it would run the queries up to 31x per month. - Changing dashboard filters on a dashboard re-runs the reports that use the changed parameter, unless the results for that combination of parameters are cached The monthly quota resets on the 1st of every month (UTC time), so in case you run of executions, you can either upgrade the plan or wait until the start of the next month. The currently used number of executions can be always checked in the Subscription admin page.
https://docs.cluvio.com/hc/en-us/articles/115001372045-Free-plan-limits
2017-08-16T21:43:05
CC-MAIN-2017-34
1502886102663.36
[array(['/hc/en-us/article_attachments/115001406709/Screenshot_2017-02-17_15.09.11.png', None], dtype=object) ]
docs.cluvio.com
2.4) ]['status'].append(m) c['workers'] = [ worker.Worker('bot-solaris', 'solarispasswd', notify_on_missing="[email protected]") ] 2.4.5.3. Local Workers¶ For smaller setups, you may want to just run the workers on the same machine as the master. To simplify the maintainance,.4. As of this writing, buildbot ships with an abstract base class for building latent workers, and a concrete implementation for AWS EC2 and for libvirt. never automatically shutdown. Supported Latent Workers¶ As of time of writing, Buildbot supports the following latent workers:.
http://docs.buildbot.net/0.9.4/manual/cfg-workers.html
2017-08-16T21:36:14
CC-MAIN-2017-34
1502886102663.36
[]
docs.buildbot.net
Great group several times at various shows around the country and have been impressed by their philosophy of local sourcing key ingredients in their products and spending the time and effort on research and sustainable production. At one of these shows I acquired the Docs Eco Eggs which I have to say, my fish go absolutely mad for. More recently I picked up a few packs of the Doc’s Eco Eggs Brew- with the intent to see how my corals liked it. As described by the Doc, John Hirsch M.D., himself- The Eco Egg Brew is a special brew of the Doc’s Eco Eggs in a rich brine solution. It has a 12.7% protein and 1.26 fat content that is high in omega 3, 6 and 9 fatty acids. What this translates to is a very thick, nutritious paste that has a vast array of particle sizes from the intact egg to tiny fragments suitable for the most picky filter feeders. The pack is a simple squirt bottle that can be used to directly feed the tank, but I prefer to first dilute the Brew in tank water and directly target feed the corals. My initial impressions were favorable, apart from the fish dining on the whole eggs, the diluted brew initiated a good feeding response on the corals I tried such as Acanthastrea, Fungia and various other LPS. Tubastrea were also a huge fan of the Brew. In each case the feeder tentacles could be seen avidly seeking particles and taking them up. I have been using this product now for a few months and have been impressed with the shelf life, ease of use and most importantly the overall health of the corals we are propagating. The only downside, if it is such, is the fish seem to really like this product. Dilution in tank water helps break the particles to a size that is overlooked by the fish, but there are quite a few eggs in the brew as well. To me this isn’t actually an issue at all as my fish look pretty healthy too. Another interesting response I have noted is that the various shrimp, acro crabs and other filter feeders such as Coco worms also react strongly to the addition of the Brew to the tank- especially the fire shrimp which actually come out into the light for once. Overall impression: I love this stuff, but more importantly my corals love it too. It may be wishful thinking on my part, but the Acans and Scolymia look incredibly vibrant and full and I am getting good overall growth on the other LPS frags in the system and I will continue to use this product in my systems. The Docs Eco Eggs Brew retails at 15.99 from their website; The Doc, and his team will be at MACNA in Denver. If you are lucky enough to be attending, swing by and check out some of their great videos of feeding responses to their various products.
http://docseco.com/2014/08/docs-eco-egg-brew-is-on-reefbuilders/
2017-08-16T21:43:02
CC-MAIN-2017-34
1502886102663.36
[array(['http://docseco.com/wp-content/uploads/2013/08/Docs-Eco-Brew.png', 'Docs-Eco-Brew'], dtype=object) ]
docseco.com
Add the Location Tracking Framework The Location Tracking framework tracks the location (latitude and longitude) of the user's device. It updates the Hurree app with the user's location. For example, your organisation can implement marketing campaigns that use this location-based information. This framework needs to be added separately, in Objective-C or Swift. Instructions for each are outlined below. In addition, integrators need to implement the code to enable pop-up to appear for the user. This section explains how to add this required framework to your project. To add the CoreLocation framework to your project - Add the CoreLocation.frameworkto enable location tracking. - Using the CLLocationManager, enable the location tracking opt-in for your users. User Opt-In Feature Location tracking is an opt-in feature; end users can decide whether to enable it or not.
https://docs.hurree.co/ios_sdk/add_ios_frameworks/add_the_location_tracking_framework.html
2017-08-16T21:46:30
CC-MAIN-2017-34
1502886102663.36
[array(['../../assets/39.png', 'Add Framework'], dtype=object) array(['../../assets/40.png', 'Set as Required'], dtype=object)]
docs.hurree.co
Security If you have set the security property 'sonar.forceAuthentication' to 'true', to run an analysis, it is now mandatory to set the the project cannot be accessed anonymously, the 'sonar.login' and 'sonar.password' properties are now required to run an existing user.analysis. See the following pages for detailed documentation on their usage (Security section): Upgrade the Motion Chart Plugin ...
http://docs.codehaus.org/pages/diffpages.action?originalId=230396494&pageId=230396556
2013-12-05T06:32:18
CC-MAIN-2013-48
1386163040712
[]
docs.codehaus.org
The)... Note: Create your new Joomla! 1.5 install in a separate. Return to Upgrade Instructions
http://docs.joomla.org/index.php?title=J1.5:Migrating_from_1.0.x_to_1.5_Stable&oldid=10096
2013-12-05T06:34:17
CC-MAIN-2013-48
1386163040712
[]
docs.joomla.org
... This tutorial tries to walk you through the idea of Aspects deployment we have introduced in AspectWerkz 1.. This tutorial is using AspectWerkz 2.0. You will learn how to integrate aspects and aspects libraries as regular components in your J2EE environment through the help of META-INF/aop.xml and WEB-INF/aop.xml files. ... I assume you have a Java 1.4 correctly installed. Download the 12.0 AspectWerkz release (or latest RC) and unzip it into a relevant location. I will use C:\aw\aspectwerkz\ This tutorial is based on the 12.0 beta1 version of AspectWerkz (you need to move file after unziping since this beta release contains some extra aspectwerkz3 subdirectory etc.). The latest distribution can be found here. ...-12.0-beta1.jar demoAOP\DemoAspect.java, but usage of the environment variable makes sense here to set the CLASSPATH: ... We carrfully exclude the package name of the aspect thus: As a consequence, the following AOP XML descriptor will be used. Write this file in C:\aw\tomcat\META-INF\aop.xml for now. ... ...
http://docs.codehaus.org/pages/diffpages.action?originalId=20566&pageId=228163688
2013-12-05T06:20:14
CC-MAIN-2013-48
1386163040712
[]
docs.codehaus.org
. The exception would be titles for images and other true titles like that..)
http://docs.joomla.org/index.php?title=JDOC_talk:Words_to_watch&oldid=32624
2013-12-05T06:39:12
CC-MAIN-2013-48
1386163040712
[]
docs.joomla.org
1 /* 2 * Copyright.client.core; 18 19 import java.io.IOException; 20 21 import org.springframework.ws.WebServiceMessage; 22 23 /** 24 * Defines the interface for objects than can resolve fault {@link WebServiceMessage}s. 25 * 26 * @author Arjen Poutsma 27 * @since 1.0.0 28 */ 29 public interface FaultMessageResolver { 30 31 /** 32 * Try to resolve the given fault message that got received. 33 * 34 * @param message the fault message 35 */ 36 void resolveFault(WebServiceMessage message) throws IOException; 37 38 }
http://docs.spring.io/spring-ws/sites/2.0/xref/org/springframework/ws/client/core/FaultMessageResolver.html
2013-12-05T06:19:19
CC-MAIN-2013-48
1386163040712
[]
docs.spring.io
The import event provides a framework that you can use to view and edit specific survey data that is referenced within the import event. An import event is created each time you import data using either the Import Survey Data wizard, the import commands on the survey network shortcut menu, or the import Survey LandXML command. Import Events are displayed as a collection within a database on the ToolspaceSurvey tab. The default name for the import event is the same as the imported file name "<File name>.<ext>". The Import Events collection contains the Networks, Figures, and Survey Points that are referenced from the specific import command, and provides a convenient way to remove, re-import, and reprocess linework, and insert survey data into the current drawing. Commands entered in the Survey Command Window or from using the Run Batch File command do not create an import event. To maintain the integrity of existing networks, you should create a separate network when you use the Survey Command Window. Use the following right-click commands to view and edit data in the Import Event collection and in individual import events.
http://docs.autodesk.com/CIV3D/2012/ENU/filesCUG/GUID-6CC6585F-9B98-4803-92DA-5A12B664A33-371.htm
2013-12-05T06:31:10
CC-MAIN-2013-48
1386163040712
[]
docs.autodesk.com
@GwtCompatible public interface Multiset<E> A collection that supports order-independent equality, like Set, but may have duplicate elements. A multiset is also sometimes called a bag. Elements of a multiset that are equal to one another (see "Note on element equivalence", below) are referred to as occurrences of the same single element. The total number of occurrences of an element in a multiset is called the count of that element (the terms "frequency" and "multiplicity" are equivalent, but not used in this API). Since the count of an element is represented as an int, a multiset may never contain more than Integer.MAX_VALUE occurrences of any one element. Multiset refines the specifications of several methods from Collection. It also defines an additional query operation, count(java.lang.Object), which returns the count of an element. There are five new bulk-modification operations, for example add(Object, int), to add or remove multiple occurrences of an element at once, or to set the count of an element to a specific value. These modification operations are optional, but implementations which support the standard collection operations add(Object) or remove(Object) are encouraged to implement the related methods as well. Finally, two collection views are provided: elementSet() contains the distinct elements of the multiset "with duplicates collapsed", and entrySet() is similar but contains Multiset.Entry instances, each providing both a distinct element and the count of that element. In addition to these required methods, implementations of Multiset are expected to provide two static creation methods: create(), returning an empty multiset, and create(Iterable<? extends E>), returning a multiset containing the given initial elements. This is simply a refinement of Collection's constructor recommendations, reflecting the new developments of Java 5. As with other collection types, the modification operations are optional, and should throw UnsupportedOperationException when they are not implemented. Most implementations should support either all add operations or none of them, all removal operations or none of them, and if and only if all of these are supported, the setCount methods as well. A multiset uses Object.equals(java.lang.Object) to determine whether two instances should be considered "the same," unless specified otherwise by the implementation. int count(@Null. element- the element to count occurrences of int add(@Nullable E element, int occurrences) occurrences == 1, this method has the identical effect to add(Object). This method is functionally equivalent (except in the case of overflow) to the call addAll(Collections.nCopies(element, occurrences)), which would presumably perform much more poorly. element- the element to add occurrences of; may be nullonly if explicitly allowed by the implementation occurrences- the number of occurrences of the element to add. May be zero, in which case no change will be made. IllegalArgumentException- if occurrencesis negative, or if this operation would result in more than Integer.MAX_VALUEoccurrences of the element NullPointerException- if elementis null and this implementation does not permit null elements. Note that if occurrencesis zero, the implementation may opt to return normally. int remove(@Nullable Object element, int occurrences) occurrences == 1, this is functionally equivalent to the call remove(element). element- the element to conditionally remove occurrences of occurrences- the number of occurrences of the element to remove. May be zero, in which case no change will be made. IllegalArgumentException- if occurrencesis negative int setCount(E element, int count) element- the element to add or remove occurrences of; may be null only if explicitly allowed by the implementation count- the desired count of the element in this multiset IllegalArgumentException- if countis negative NullPointerException- if elementis null and this implementation does not permit null elements. Note that if countis zero, the implementor may optionally return zero instead. boolean setCount(E element, int oldCount, int newCount) setCount(Object, int), provided that the element has the expected current count. If the current count is not oldCount, no change is made.. IllegalArgumentException- if oldCountor newCountis negative NullPointerException- if elementis null and the implementation does not permit null elements. Note that if oldCountand newCountare both zero, the implementor may optionally return trueinstead.. boolean equals(@Nullable Object object) trueif the given object is also a multiset and contains equal elements with equal counts, regardless of order. equalsin interface Collection<E> equalsin class Object object- the reference object with which to compare. trueif this object is the same as the obj argument; falseotherwise. Object.hashCode(), Hashtable int hashCode() over all distinct elements in the multiset. It follows that a multiset and its entry set always have the same hash code.over all distinct elements in the multiset. It follows that a multiset and its entry set always have the same hash code. ((element == null) ? 0 : element.hashCode()) ^ count(element) hashCodein interface Collection<E> hashCodein class Object Object.equals(java.lang.Object), Hashtable()) It is recommended, though not mandatory, that this method return the result of invoking toString() on the entrySet(), yielding a result such as [a x 3, c, d x 2, e]. toStringin class Object Iterator<E> iterator() Elements that occur multiple times in the multiset will appear multiple times in this iterator, though not necessarily sequentially. iteratorin interface Collection<E> iteratorin interface Iterable<E> boolean contains(@Nullable Object element) This method refines Collection.contains(java.lang.Object) to further specify that it may not throw an exception in response to element being null or of the wrong type. containsin interface Collection<E> element- the element to check for trueif this multiset contains at least one occurrence of the element boolean containsAll(Collection<?> elements) trueif this multiset contains at least one occurrence of each element in the specified collection. This method refines Collection.containsAll(java.util.Collection>) to further specify that it may not throw an exception in response to any of elements being null or of the wrong type. Note: this method does not take into account the occurrence count of an element in the two collections; it may still return true even if elements contains several occurrences of an element and this multiset contains only one. This is no different than any other collection type like List, but it may be unexpected to the user of a multiset. containsAllin interface Collection<E> elements- the collection of elements to be checked for containment in this multiset trueif this multiset contains at least one occurrence of each element contained in elements NullPointerException- if elementsis null Collection.contains(Object) boolean add(E element) This method refines Collection.add(E), which only ensures the presence of the element, to further specify that a successful call must always increment the count of the element, and the overall size of the collection, by one. addin interface Collection<E> element- the element to add one occurrence of; may be null only if explicitly allowed by the implementation truealways, since this call is required to modify the multiset, unlike other Collectiontypes NullPointerException- if elementis null and this implementation does not permit null elements IllegalArgumentException- if Integer.MAX_VALUEoccurrences of elementare already contained in this multiset boolean remove(@Nullable Object element) This method refines Collection.remove(java.lang.Object) to further specify that it may not throw an exception in response to element being null or of the wrong type. removein interface Collection<E> element- the element to remove one occurrence of trueif an occurrence was found and removed boolean removeAll(Collection<?> c) This method refines Collection.removeAll(java.util.Collection>) to further specify that it may not throw an exception in response to any of elements being null or of the wrong type. removeAllin interface Collection<E> c- collection containing elements to be removed from this collection Collection.remove(Object), Collection.contains(Object) boolean retainAll(Collection<?> c) This method refines Collection.retainAll(java.util.Collection>) to further specify that it may not throw an exception in response to any of elements being null or of the wrong type. retainAllin interface Collection<E> c- collection containing elements to be retained in this collection Collection.remove(Object), Collection.contains(Object)
http://docs.guava-libraries.googlecode.com/git-history/v9.0/javadoc/com/google/common/collect/Multiset.html
2013-12-05T06:18:44
CC-MAIN-2013-48
1386163040712
[]
docs.guava-libraries.googlecode.com
3.7.4.1. 'content'¶ Default: 60 Cache expiration (in seconds) for show_placeholder and page_url template tags. Note This settings was previously called CMS_CONTENT_CACHE_DURATION django CMS has a number of settings to configure its behaviour. These should be available in your settings.py file. = { 'content': { 'plugins': ['TextPlugin', 'PicturePlugin'], 'text_only_plugins': ['LinkPlugin'] 'extra_context': {"width":640}, 'name': gettext("Content"), 'language_fallback': True, 'child_classes': { 'TextPlugin': ['PicturePlugin', 'LinkPlugin'], }, 'parent_classes': { 'LinkPlugin': ['TextPlugin', 'StackPlugin'], } }, 'right-column': { "plugins": ['TeaserPlugin', 'LinkPlugin'], "extra_context": {"width":280}, 'name': gettext("Right Column"), 'limits': { 'global': 2, 'TeaserPlugin': 1, 'LinkPlugin': 1, }, 'plugin_modules': { 'LinkPlugin': 'Extra', }. 'plugin_labels': { 'LinkPlugin': 'Add a link', }. }, 'base.html content': { "plugins": ['TextPlugin', 'PicturePlugin', 'TeaserPlugin'] }, } You can combine template names and placeholder names to granularly define plugins, as shown above with base.html content. accessible in the frontend. You may want for example to keep a langage., databases using django-reversion could grow huge. To help address this issue, only published revisions will now be saved. This setting declares how many published revisions are saved in the database. By default the newest 25 published revisions are kept; all others are deleted when you publish a page. If set to 0 all published revisions are kept, but you will need to ensure that the revision table does not grow excessively large.
http://django-cms.readthedocs.org/en/develop/getting_started/configuration.html
2013-12-05T06:20:58
CC-MAIN-2013-48
1386163040712
[]
django-cms.readthedocs.org
StAX API are explained in greater detail later in this lesson, but their main features are briefly described below. Cursor API. Iterator API XMLEvent table.; // ... } Iterator Event Types XMLEvent Types Defined in the Event Iterator API Note that the DTD , EntityDeclaration , EntityReference , NotationDeclaration , and ProcessingInstruction events are only created if the document being processed contains a DTD. Example of Event Mapping the following table. Note that secondary events, shown in curly braces ( \{\} ), are typically accessed from a primary event rather than directly.has a corresponding EndElement, even for empty elements. Attributeevents are treated as secondary events, and are accessed from their corresponding StartElementevent. Similar to Attributeevents, Namespaceevents are treated as secondary, but appear twice and are accessible twice in the event stream, first from their corresponding StartElementand then from their corresponding EndElement. Characterevents are specified for all elements, even if those elements have no character data. Similarly, Characterevents can be split across events. The StAX parser maintains a namespace stack, which holds information about all XML namespaces defined for the current element and its ancestors. The namespace stack, which is exposed through the javax\.xml\.namespace\.NamespaceContextinterface, can be accessed by namespace prefix or URI. Choosing between Cursor and Iterator APIs It is reasonable to ask at this point, “What API should I choose? Should I create instances of XMLStreamReader or XMLEventReader ? Why are there two kinds of APIs anyway?” Development Goals The authors of the StAX specification targeted three types of developers: Library and infrastructure developers : . Comparing Cursor and Iterator APIs Before choosing between the cursor and iterator APIs, you should note a few things that you can do with the iterator API that you cannot do with the cursor API: Objects created from the XMLEventsubclasses are immutable, and can be used in arrays, lists, and maps, and can be passed through your applications even after the parser has moved on to subsequent events. You can create subtypes of XMLEventthat Java ME,.
https://www.docs4dev.com/docs/en/java/java8/tutorials/jaxp-stax-api.html
2021-06-12T23:32:04
CC-MAIN-2021-25
1623487586465.3
[]
www.docs4dev.com
Step 2: Set Up the AWS Command Line Interface (AWS CLI) You don't need the AWS CLI to perform the steps in the Getting Started exercises. However, some of the other exercises in this guide do require it. If you prefer, you can skip this step and go to Step 3: Getting Started Using the Amazon Comprehend Console, and set up the AWS CLI later. To set up the AWS CLI Download and configure the AWS CLI. For instructions, see the following topics in the AWS Command Line Interface User Guide: In the AWS CLI config file, add a named profile for the administrator user:. [profile adminuser] aws_access_key_id = adminuser access key IDaws_secret_access_key = adminuser secret access keyregion = aws-region You use this profile when executing the AWS CLI commands. help Next Step Step 3: Getting Started Using the Amazon Comprehend Console
https://docs.aws.amazon.com/comprehend/latest/dg/setup-awscli.html
2021-06-12T23:41:33
CC-MAIN-2021-25
1623487586465.3
[]
docs.aws.amazon.com
Sampling data for a running job You can sample data from a running job. This is useful if you want to inspect the data to make sure the job is producing the results you expect. - Select Console on the main menu. - Click on the SQL Jobs tab. - Select the job you would like to edit. - Select the Details tab at the bottom. - Select the Edit Selected Job button.The SQL window in Edit Mode appears. - Click the Sample button.
https://docs.cloudera.com/csa/1.3.0/ssb-job-lifecycle/topics/csa-ssb-sampling-data.html
2021-06-13T00:00:06
CC-MAIN-2021-25
1623487586465.3
[]
docs.cloudera.com
This product has been discontinued. We recommend to use Groups WooCommerce and WooCommerce Subscriptions instead..
https://docs.itthinx.com/document/groups-paypal/
2021-06-12T23:29:49
CC-MAIN-2021-25
1623487586465.3
[]
docs.itthinx.com
A Particle SystemA component that simulates fluid entities such as liquids, clouds and flames by generating and animating large numbers of small 2D images in the scene. More info See in Glossary can use Unity’s C# Job System to apply custom behaviors to particles. Unity distributes work from the C# Job System across worker threads, and can make use of the Burst Compiler. The GetParticles() and SetParticles() methods offer similar functionality, but run on the main thread and cannot make use of Unity’s Burst Compiler. By default, a Particle System job only has access to one or more particles belonging to that Particle System. Unity passes this data to the job using a ParticleSystemJobData struct. You must pass any other data that the job requires as additional parameters. To access particle data, Unity supports the following job types: This job type executes a single job on a single worker thread. The job has access to every particle belonging to the Particle System. For example code on this job type, see the IJobParticleSystem.Execute() Scripting reference. This job type executes multiple jobs across multiple worker threads. Each job can only access the particle at the index specified by the job’s Execute() function. For example code on this job type, see the IJobParticleSystemParallelFor.Execute(). This job type executes multiple jobs across multiple worker threads. Each job can only access the particles within the range specified by the job’s Execute() function. For example code on this job type, see the IJobParticleSystemParallelForBatch.Execute(). As with any other C# job, you can use the Burst Compiler to compile your particle jobs into highly optimized Burst jobs. For more information, see the Burst Compiler documentation. New feature in Unity 2019.3
https://docs.unity3d.com/Manual/particle-system-job-system-integration.html
2021-06-12T23:45:00
CC-MAIN-2021-25
1623487586465.3
[]
docs.unity3d.com
Raw Types A raw type is the name of a generic class or interface without any type arguments. Object s. For backward compatibility, assigning a parameterized type to its raw type is allowed: Box<String> stringBox = new Box<>(); Box rawBox = stringBox; // OK But Annotations.
https://www.docs4dev.com/docs/en/java/java8/tutorials/java-generics-rawTypes.html
2021-06-13T00:31:58
CC-MAIN-2021-25
1623487586465.3
[]
www.docs4dev.com
Basic Search The simplest form of search requires that you specify the set of attributes that an entry must have and the name of the target context in which to perform the search. The following code creates an attribute set matchAttrs , which has two attributes "sn" and "mail" . It specifies that the qualifying entries must have a surname ( "sn" ) attribute with a value of "Geisel" and a "mail" attribute with any value. It then invokes DirContext.search() to search the context "ou=People" for entries that have the attributes specified by matchAttrs . // Specify the attributes to match // Ask for objects that has a surname ("sn") attribute with // the value "Geisel" and the "mail" attribute // ignore attribute name case Attributes matchAttrs = new BasicAttributes(true); matchAttrs.put(new BasicAttribute("sn", "Geisel")); matchAttrs.put(new BasicAttribute("mail")); // Search for objects that have those matching attributes NamingEnumeration answer = ctx.search("ou=People", matchAttrs); You can then print the results as follows. while (answer.hasMore()) { SearchResult sr = (SearchResult)answer.next(); System.out.println(">>>" + sr.getName()); printAttrs(sr.getAttributes()); } printAttrs\(\) is similar to the code in the getAttributes() example that prints an attribute set. Running this example produces the following result. # java SearchRetAll >>>cn=Ted Geisel attribute: sn value: Geisel attribute: objectclass value: top value: person value: organizationalPerson value: inetOrgPerson attribute: jpegphoto value: [B@1dacd78b attribute: mail value: [email protected] attribute: facsimiletelephonenumber value: +1 408 555 2329 attribute: cn value: Ted Geisel attribute: telephonenumber value: +1 408 555 5252 Returning Selected Attributes The previous example returned all attributes associated with the entries that satisfy the specified query. You can select the attributes to return by passing search\(\) an array of attribute identifiers that you want to include in the result. After creating the matchAttrs as shown previously, you also need to create the array of attribute identifiers, as shown next. // Specify the ids of the attributes to return String[] attrIDs = {"sn", "telephonenumber", "golfhandicap", "mail"}; // Search for objects that have those matching attributes NamingEnumeration answer = ctx.search("ou=People", matchAttrs, attrIDs); This example returns the attributes "sn" , "telephonenumber" , "golfhandicap" , and "mail" of entries that have an attribute "mail" and have a "sn" attribute with the value "Geisel" . This example produces the following result. (The entry does not have a "golfhandicap" attribute, so it is not returned.) # java Search >>>cn=Ted Geisel attribute: sn value: Geisel attribute: mail value: [email protected] attribute: telephonenumber value: +1 408 555 5252
https://www.docs4dev.com/docs/en/java/java8/tutorials/jndi-ops-basicsearch.html
2021-06-13T00:35:04
CC-MAIN-2021-25
1623487586465.3
[]
www.docs4dev.com
The registration widget editor Once you widget has been created you'll be taken to the widget editor. The editor is separated into 3 sections, the toolbar, the settings panel and the preview. Toolbar The toolbar has icons for performing general tasks like saving the widget, adding a new block (see below), accessing the editor settings, exiting the editor and accessing this help document. Settings Panel Settings are split into groups, and each group has one or more settings. You can change any of the settings in a group and see the changes in real-time in the preview pane. Preview Pane The preview pane shows you a real-time preview of what your widget will look like on your site. General Settings Before you can save your widget you'll need to give it a name and choose a webinar that this widget will be associated with. Layout Settings Layout settings let you set the background color of your widget as well as the max width and any padding or border options. Error Messages Display as Popup Blocks Widgets are made up of multiple blocks, for example text, countdown timers, buttons, input forms. These blocks make the design of the form. You can add as many blocks to your form as you want. A typical widget might have a text block to show a title, a countdown timer, input form and an action button. Each block has its own options that let you customize its look and feel. Adding new blocks You can add blocks to your form by clicking on the + bicon in the toolbar, this will add a new block to the end of the form. Alternatively you can insert a new block above an existing block by selecting the existing block and clicking on the + icon to the right. Then choose the block type you want to add and it will be added to the form Deleting Blocks If you want to remove a block just select the block's options in the sidebar and click on the trash can Reordering blocks Blocks can be moved up and down the form by using the up and down arrows in the blocks options. Next Steps Let's take a look at the the different block types.
http://docs.getwebinarpress.com/article/81-registration-widget-editor
2021-06-12T22:36:30
CC-MAIN-2021-25
1623487586465.3
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5afd7e400428635ba8b26775/images/5c45e9962c7d3a66e32d7180/file-W6zfPv14Pl.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5afd7e400428635ba8b26775/images/5c4355ac042863543ccc03b8/file-hbRAcQR9QO.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5afd7e400428635ba8b26775/images/5c4352e5042863543ccc03ad/file-2w1xtF9aUg.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5afd7e400428635ba8b26775/images/5c43545d042863543ccc03b1/file-XouKvG2Sed.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5afd7e400428635ba8b26775/images/5c4354932c7d3a66e32d68f1/file-gqCnQxRKKN.png', None], dtype=object) ]
docs.getwebinarpress.com
A newer version of this page is available. Switch to the current version. Command Item ASPxCardView's Command items allow end users to select and delete cards, switch the card view to edit mode, update data, and much more. A command item represents a single command. The eight available command items are as follows: New, Edit, Delete, Select, Update, Clear, Apply and Cancel. The Delete item, for instance, allows users to delete cards. A command item is displayed as a link by default. It can also be a button or an image. Feedback
https://docs.devexpress.com/AspNet/114146/aspnet-webforms-controls/card-view/visual-elements/command-item?v=19.2
2021-06-12T23:31:06
CC-MAIN-2021-25
1623487586465.3
[array(['/AspNet/images/aspxcardview_visualelements_commanditem118498.png?v=19.2', 'ASPxCardView_VisualElements_CommandItem'], dtype=object) ]
docs.devexpress.com
Wstęp do AR/VR. Serwer AR/VR stereoscopic image to stereoscopic the latter case, a viewport that shows its output on screen will show an undistorted version of the left eye, while showing the fully processed stereoscopic. Nowe węzły AR/VR¶ Three new node types have been added for supporting AR and VR in Godot and one additional node type especially for AR. These are: ARVROrigin - nasz punkt startowy w świecie ARVRCamera - a special subclass of the camera, which is positionally tracked ARVRController - a new spatial class, which position of the origin point which you will have to adjust. ARVRCamera jest drugim węzłem, który zawsze musi być częścią twojej sceny i zawsze musi być węzłem podrzędnym twojego węzła początkowego. Jest to podklasa normalnego aparatu Godot. Jednak jego pozycja jest automatycznie aktualizowana w każdej klatce w oparciu o fizyczną orientację i pozycji HMD. Ze względu na precyzję wymaganą do renderowania na HMD lub do renderowania nakładki AR na prawdziwej kamerze, większość właściwości kamery jest ignorowana. Jedynym z używanych właściwości kamery jest właściwość rozróżniania bliskiej i dalekiej płaszczyzny. FOV, proporcje obrazu i tryb projekcji są nieużywane. with the native mobile interface. Obviously, you need to add something more into your scene, so there is something to see, but after that, you can export the game to your phone of choice, pop it into a viewer and away you go. Official plugins and resources¶ As mentioned earlier, Godot does not support the various VR and AR SDKs out of the box, you need a plugin for the specific SDK you want to use. There are several official plugins available in the GodotVR Repository. Godot Oculus Mobile provides support for the Oculus Go and Oculus Quest. The Quest will require additional setup documented in Developing for Oculus Quest. Godot OpenVR (not to be confused with OpenXR) supports the OpenVR SDK used by Steam. Sforkuj repozytorium Godota. Godot OpenHMD supports OpenHMD, an open source API and drivers for headsets. Godot OpenXR supports OpenXR, an open standard for VR and AR software. This plugin is early in development, only supports Linux and requires extra setup described in the repository. These plugins can be downloaded from GitHub or the Godot Asset Library. In addition to the plugins, there are several official demos. Sforkuj repozytorium Godota. Godot OpenVR FPS (the tutorial for this project is VR starter tutorial part 1). Godot XR tools, which shows implementations for VR features such as movement and picking up objects. Other things to consider¶ There are a few other subjects that we need to briefly touch upon in this primer that are important to know. Pierwsze co to nasze jednostki. W normalnych grach 3D nie musisz wiele myśleć o jednostkach. Dopóki wszystko jest w tej samej skali, jednak gdy pudełko o wymiarach 1 ,na 1 ,na 1 jednostkę w AR możemy mieć dowolny rozmiar w zależności od perspektywy, który może ktoś trzymać w dłoni lub czasami porównywać do wielkości budynku. W AR i VR zmienia się to, ponieważ rzeczy w twoim wirtualnym świecie są odwzorowywane na podstawie prawdziwym świecie. Jeśli w rzeczywistym świecie zrobisz krok np. 1 krok w przód, lecz jeśli w wirtualnym świecie wykonasz tylko 1 cm, mamy problem. To samo z pozycją kontrolerów ,jeśli nie pojawiają się we właściwej pozycji względem gracza, przerywa to zanurzenie w grze. Większość platform na VR, podczas gdy są na Serwerze AR/VR, zakłada , że 10 jednostek = 1 metr. Performance is another thing that needs to be carefully considered. Especially VR taxes your game a lot more than most people realize. acceptablyoveated.
https://docs.godotengine.org/pl/latest/tutorials/vr/vr_primer.html
2021-06-12T23:37:28
CC-MAIN-2021-25
1623487586465.3
[array(['../../_images/minimum_setup.png', '../../_images/minimum_setup.png'], dtype=object)]
docs.godotengine.org
RLMSyncSubscriptionOptions @interface RLMSyncSubscriptionOptions : NSObject Configuration options for query-based sync subscriptions. The name of the subscription. Naming a subscription makes it possible to look up a subscription by name (using -[RLMRealm subscriptionWithName:]) or update an existing subscription rather than creating a new one. Declaration Objective-C @property (nonatomic, copy, nullable) NSString *name; Swift var name: String? { get set } Whether this should update an existing subscription with the same name. By default trying to create a subscription with a name that’s already in use will fail unless the new subscription is an exact match for the existing one. If this is set to YES, instead the existing subscription will be updated using the query and options from the new subscription. This only works if the new subscription is for the same type of objects as the existing subscription. Trying to overwrite a subscription with a subscription of a different type of objects will fail. The updatedAtand (if timeToLiveis used) expiresAtproperties are updated whenever a subscription is overwritten even if nothing else has changed. Declaration Objective-C @property (nonatomic) BOOL overwriteExisting; Swift var overwriteExisting: Bool { get set } How long (in seconds) a subscription should persist after being created. By default subscriptions are persistent, and last until they are explicitly removed by calling unsubscribe(). Subscriptions can instead be made temporary by setting the time to live to how long the subscription should remain. After that time has elapsed the subscription will be automatically removed. A time to live of 0 or less disables subscription expiration. Declaration Objective-C @property (nonatomic) NSTimeInterval timeToLive; Swift var timeToLive: TimeInterval { get set } The maximum number of top-level matches to include in this subscription. If more top-level objects than the limit match the query, only the first limitobjects will be included. This respects the sort and distinct order of the query being subscribed to for the determination of what the “first” objects are. The limit does not count or apply to objects which are added indirectly due to being linked to by the objects in the subscription or due to being listed in includeLinkingObjectProperties. If the limit is larger than the number of objects which match the query, all objects will be included. A limit of zero is treated as unlimited. Declaration Objective-C @property (nonatomic) NSUInteger limit; Swift var limit: UInt { get set } Which RLMLinkingObjects properties should be included in the subscription. Outgoing links (i.e. RLMArrayand RLMObjectproperties) are automatically included in sync subscriptions. That is, if you subscribe to a query which matches one object, every object which is reachable via links from that object are also included in the subscription. By default, RLMLinkingObjects properties do not work this way. Instead, they only report objects which happen to be included in a subscription. By naming a RLMLinkingObjects property in this array, it can instead be treated as if it was a RLMArray and include all objects which link to this object. Any keypath which ends in a RLMLinkingObject property can be included in this array, including ones involving intermediate links. Declaration Objective-C @property (nonatomic, copy, nullable) NSArray<NSString *> *includeLinkingObjectProperties; Swift var includeLinkingObjectProperties: [String]? { get set }
https://docs.mongodb.com/realm-legacy/docs/objc/5.4.2/api/Classes/RLMSyncSubscriptionOptions.html
2021-06-12T23:12:10
CC-MAIN-2021-25
1623487586465.3
[]
docs.mongodb.com
Player Portal OverviewOverview The AccelByte Player Portal is a digital distribution website for purchasing video games. It also gives players a convenient place to read the news, purchase in-game items, contact customer service, and manage their personal profiles. The Player Portal also includes high-level features such as secure login and logout for players, easy registration with 3rd party accounts, and other supporting features such as: - Wordpress integration to display news, game instructions, and in-game descriptions. - Wallet recharge so players can top-up their available funds. - Advanced 3rd party accounts management. Managing a User Profile in the Player PortalManaging a User Profile in the Player Portal After you have logged into the Player Portal, you can manage your account data such as your profile, orders, and password. How to Manage a User ProfileHow to Manage a User Profile On the landing page, hover over the profile icon and select My Account. On the My Account page, click the My Profile menu on the sidebar. Modify the fields you want to edit, then click the Save Changes button when you’re done. How to Recharge Coins Using the Pay StationHow to Recharge Coins Using the Pay Station On the landing page, hover over the profile button and select Recharge. Choose which package you’d like to buy and click Buy under the desired package. Choose how many packages you’d like to buy from the Quantity dropdown menu. The Amount to pay will update if you modify the quantity. When you’re done, click Continue to Payment. After that, you will be directed to the AccelByte Pay Station window. Select the desired payment method and follow the onscreen instructions. Click Show More Methods to load more payment options. How to Manage Purchased ItemsHow to Manage Purchased Items On the landing page, hover over the profile icon and select Order History. Stadia account. Choose Stadia and the Google Player Portal accounts, your game progress from the previous account will be lost. Confirm your account by clicking on the Yes button. When you’re done, your account will be added to the Linked Accounts list. Your Stadia account will be successfully linked. How to Change Your PasswordHow to Change Your Password On the landing page, hover over the profile button and select Change Password.. Downloading AccelByte Launcher from Player PortalDownloading AccelByte Launcher from Player Portal Your players will need to download AccelByte Launcher in order to play their downloaded games. How to Download the AccelByte LauncherHow to Download the AccelByte Launcher On the landing page, click Get Launcher. Launcher will start to download immediately. After you have downloaded the Launcher, install it and log in using the same Email and Password you use to log into the Player Portal. Buying In-Game ItemsBuying In-Game Items Players can buy in-game items for your games from the Player Portal. How to Buy In-Game ItemsHow to Buy In-Game Items On the Player Portal landing page, go to the Store page. Select which game store you want to browse. In this tutorial, the AB Shooter Game store page is selected. Choose which items you want to purchase from the provided list. You can click on any item to see more information about it. When you’ve decided what items to buy, click Buy Now. Then select the payment method you want to use from the dropdown that appears. In this tutorial, we’re choosing to pay with real currency. The Checkout window appears. After confirming your purchase, click Continue to Payment. Next, choose your desired Payment Method and follow the onscreen instructions. After your payment is processed you will receive your item. It will also appear in your Order History. Buying a Product KeyBuying a Product Key How to Buy a Product KeyHow to Buy a Product Key Go to the Store page, then switch to the Product Key tab. Redeem Code. Input the campaign code in the field to redeem the item. In this case, the code grants the player 125 JC. Here you can see that 125 JC has been added to the player’s balance. Subscription PlanSubscription Plan. After your payment has been processed, you can explore the store and access the exclusive, subscription-only content.
http://docs-dev.accelbyte.io/docs/essential-service-guides/pp-launcher-patcher/player-portal/
2021-06-13T00:14:54
CC-MAIN-2021-25
1623487586465.3
[]
docs-dev.accelbyte.io
Timeline 08/10/06: - 15:32 Changeset [8] by - initial import of sjf2410-linux - 15:31 Changeset [7] by - initial import of s3c2410 usb loader - 15:30 Changeset [6] by - host and target - 15:30 Changeset [5] by - src and doc - 15:29 Changeset [4] by - release tags - 15:29 Changeset [3] by - developers - 15:29 Changeset [2] by - branches directory - 15:29 Changeset [1] by - create trunk 08/09/06: - 22:41 Ticket #2 (SD card driver unstable) created by - The SD card driver is not very stable and causes file system corruption. … - 22:37 Ticket #1 (kernel is running way too slow) created by - The 2.7.17.7 kernel is running way too slow at the moment. Overal system … 07/12/06: - 21:40 Changeset [792] by - - Check if name text is NULL before g_utf8_strlen on delete dialog - … Note: See TracTimeline for information about the timeline view.
http://docs.openmoko.org/trac/timeline?from=2006-08-10T15%3A29%3A27Z%2B0200&precision=second
2013-05-19T00:02:52
CC-MAIN-2013-20
1368696382989
[]
docs.openmoko.org
This. The standard reST inline markup is quite simple: use If asterisks or backquotes appear in running text and could be confused with inline markup delimiters, they have to be escaped with a backslash. Be aware of some restrictions of this markup:. Literal code blocks are introduced by ending a paragraph with the special marker ::. The literal block must be indented:. 3.10. Comments¶ Every explicit markup block which isn’t a valid markup construct (like the footnotes above) is regarded as a comment.
http://docs.python.org/release/2.7/documenting/rest.html
2013-05-19T00:15:18
CC-MAIN-2013-20
1368696382989
[]
docs.python.org
JCacheStorageCachelite/11.1 - Revision history 2013-05-19T00:09:46Z Revision history for this page on the wiki MediaWiki 1.19.3 There is no edit history for this page. 2013-05-19T00:09:46Z <p>The requested page does not exist. It may have been deleted from the wiki, or renamed. Try <a href="/Special:Search" title="Special:Search">searching on the wiki</a> for relevant new pages. </p>
http://docs.joomla.org/index.php?title=JCacheStorageCachelite/11.1&feed=atom&action=history
2013-05-19T00:09:47
CC-MAIN-2013-20
1368696382989
[]
docs.joomla.org
The CPython interpreter scans the command line and the environment for various settings. CPython implementation detail: Other implementations’ command line schemes may differ. See Alternate Implementations for further resources. When invoking Python, you may specify any of these options: python [-BdEiOQs runpy.run_module() The actual implementation of this feature. PEP 338 – Executing modules as scripts New in version 2.4. Changed in version 2.5: The named module can now be located inside a package.. Division control. The argument must be one of the following: Don’t add user site directory to sys.path New in version 2.6...
http://docs.python.org/release/2.6.6/using/cmdline.html?highlight=pythonpath
2013-05-19T00:15:12
CC-MAIN-2013-20
1368696382989
[]
docs.python.org
Welcome to Detecto’s documentation!¶ Detecto is a Python package that allows you to build fully-functioning computer vision and object detection models with just 5 lines of code. Inference on still images and videos, transfer learning on custom datasets, and serialization of models to files are just a few of Detecto’s features. Detecto is also built on top of PyTorch, allowing an easy transfer of models between the two libraries. The power of Detecto comes from its simplicity and ease of use. Creating and running a pre-trained Faster R-CNN ResNet-50 FPN from PyTorch’s model zoo takes 4 lines of code: from detecto.core import Model from detecto.visualize import detect_video model = Model() detect_video(model, 'input.mp4', 'output.avi') Below are several more examples of things you can do with Detecto: Transfer Learning on Custom Datasets Most of the times, you want a computer vision model that can detect custom objects. With Detecto, you can train a model on a custom dataset with 5 lines of code: from detecto.core import Model, Dataset dataset = Dataset('custom_dataset/'):
https://detecto.readthedocs.io/en/latest/index.html
2021-11-27T14:48:37
CC-MAIN-2021-49
1637964358189.36
[]
detecto.readthedocs.io
Viewing canary statistics and details You can view details about your canaries and see statistics about their runs. To be able to see all the details about your canary run results, you must be logged on to an account that has sufficient permissions. For more information, see Required roles and permissions for CloudWatch canaries. To view canary statistics and details Open the CloudWatch console at . In the navigation pane, choose Canaries. In the details about the canaries that you have created: Status visually shows how many of your canaries have passed their most recent runs. In the graph under Canary runs, each point represents one minute of your canaries’ runs. You can pause on a point to see details. Near the bottom of the page is a table displaying all canaries. A column on the right shows the alarms created for each canary. Only alarms that conform to the naming standard for canary alarms are displayed. This standard is Synthetics-Alarm-. Canary alarms that you create in the Synthetics section of the CloudWatch console automatically use this naming convention. If you create canary alarms in the Alarms section of the CloudWatch console or by using AWS CloudFormation, and you don't use this naming convention, the alarms work but they do not appear in this list. canaryName- index To see more details about a single canary, choose a point in the Status graph or choose the name of the canary in the Canaries table. In the details about that canary: The Availability tab displays information about the recent runs of this canary. Under Canary runs, you can choose one of the lines to see details about that run. Under the graph, you can choose Links checked, Screenshot, HAR file, or Logs to see these types of details. If the canary has active tracing enabled, you can also choose Traces to see tracing information from the canary's runs. The logs for canary runs are stored in S3 buckets and in CloudWatch Logs. Screenshots show how your customers view your webpages. You can use the HAR files (HTTP Archive files) to view detailed performance data about the webpages. You can analyze the list of web requests and catch performance issues such as time to load for an item. Log files show the record of interactions between the canary run and the webpage and can be used to identify details of errors. If the canary uses the syn-nodejs-2.0-betaruntime or later, you can sort the HAR files by status code, request size, or duration. If the canary uses the syn-nodejs-2.0-betaruntime or later and the canary executes steps in its script, you can choose a Steps tab. This tab displays a list of the canary's steps, each step's status, failure reason, URL after step execution, screenshots, and duration of step execution. For API canaries with HTTP steps, you can view steps and corresponding HTTP requests if you are using runtime syn-nodejs-2.2or later. Choose the HTTP Requests tab to view the log of each HTTP request made by the canary. You can view request/response headers, response body, status code, error and performance timings (total duration, TCP connection time, TLS handshake time, first byte time, and content transfer time). All HTTP requests which use the HTTP/HTTPS module under the hood are captured here. By default in API canaries, the request header, response header, request body, and response body are not included in the report for security reasons. If you choose to include them, the data is stored only in your S3 bucket. For information about how to include this data in the report, see executeHttpStep(stepName, requestOptions, [callback], [stepConfig]). Response body content types of text, HTML and JSON are supported. Content types like text/HTML, text/plain, application/JSON and application/x-amz-json-1.0 are supported. Compressed responses are not supported. The Monitoring tab displays graphs of the CloudWatch metrics published by this canary. For more information about these metrics, see CloudWatch metrics published by canaries. Below the CloudWatch graphics published by the canary are graphs of Lambda metrics related to the canary's Lambda code. The Configuration tab displays configuration and schedule information about the canary. The Tags tab displays the tags associated with the canary.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries_Details.html
2021-11-27T15:44:43
CC-MAIN-2021-49
1637964358189.36
[]
docs.aws.amazon.com
To enable the Ranger HDFS plugin on a Kerberos-enabled cluster, perform the steps described below. Create the system (OS) user rangerhdfslookup. Make sure this user is synced to Ranger Admin (under Settings>Users/Groups tab in the Ranger Admin User Interface). Create a Kerberos principal for rangerhdfslookupby entering the following command: kadmin.local -q 'addprinc -pw rangerhdfslookup [email protected] Navigate to the HDFS service. Click the Config tab. Navigate to advanced ranger-hdfs-plugin-properties and update the properties listed in the table shown below. After updating these properties, click Save and restart the HDFS service.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.4.0/bk_Security_Guide/content/hdfs_plugin_kerberos.html
2021-11-27T13:44:15
CC-MAIN-2021-49
1637964358189.36
[]
docs.cloudera.com
How to create a Kafka consumer How to create a Kafka producer Jaeger setup for microservices Mock integrations Plugins What is a plugin? Custom plugins Plugins Setup guide Documents plugin What is the document management plugin? Prerequisites Using the plugin Notifications plugin What is the notifications plugin? Prerequisites Using the plugin FAQs GitBook Creating a custom integration Creating integrations for the FLOWX platform is pretty straightforward, you can use your preferred technology in order to write the custom code for them. The only constraint is that they need to be able to send and receive message to / from the Kafka cluster. Steps for creating an integration Create a Microservice (we'll refer to it as an Adapter) that can listen for and react to Kafka events, using your preferred tech stack. Add custom logic for handling the received data from Kafka and obtaining related info from legacy systems. And finally, send the data back to Kafka. Here's the startup code for a Java Connector Microservice: GitHub - flowx-ai/quickstart-connector GitHub Add the related configuration in the process definition, you will have to add a message send action in one of the node in order to send the needed data to the adapter. When the response from the custom integration is ready, send it back to the engine, keep in mind, your process will wait in a receive message node. How to manage Kafka Topics Don't forget, the engine is configured to consume all the events on topics that start with a predefined topic path (ex. flowx.in.*) you will need to configure this topic pattern when setting up the Engine and make sure to use it when sending messages from the Adapters back to the Engine. all Kafka headers received by the Adapter should be sent back to the Engine with the result. How to build and Adapter Adapters should act as a light business logic layer that: Converts data from one domain to another (date formats, list of values, units etc) Adds information that are required by the integration but are not important for the process (a flag, generates a GUID for tracing etc) How to create a Kafka consumer How to create a Kafka producer Keep in mind that your are in an event driven architecture and the communication between the engine and the adapter is asyncronous. The adapters will need to be designed in such a way that they do not bloat the platform. Depending on the communication type between the adaptor and the legacy system you might need to also add custom implementation for load balancing requests, scaling the adapter and such. In order to be able to easily trace the process flow, a minimal setup for Jaeger tracing should be added to custom Adapters. Jaeger setup for microservices Integrations - Previous What is an integration? How to create a Kafka consumer Last modified 17d ago Copy link Contents Steps for creating an integration How to manage Kafka Topics How to build and Adapter
https://docs.flowx.ai/integrations/creating-a-custom-integration
2021-11-27T15:16:16
CC-MAIN-2021-49
1637964358189.36
[]
docs.flowx.ai
- Automate - The automate option allows you to approve statistics for collection by collect jobs. You can determine whether to automate statistics on a single object or table, multiple objects or tables, entire databases, or entire systems. New statistics collected on a system outside of the Stats Manager portlet are not automatically approved for collection by collect jobs. To approve new statistics for collection, set the automate options that are available on the Statistics tab. - Collect - A collect job submits COLLECT STATISTICS statements to the Teradata Database for statistics that are approved for automation. You can control the scope and schedule of each collect job, as well as assign a priority to individual COLLECT STATISTICS statements. Running a collect job can be resource-intensive on the Teradata Database so you may want to run it during periods of lower activity.
https://docs.teradata.com/r/GsdbDDAzsO9o7v49kofuFw/4opdVCgkMfUT5HnPTvFvNQ
2021-11-27T14:37:53
CC-MAIN-2021-49
1637964358189.36
[]
docs.teradata.com
In this step, the cloud administrator adds three cloud zones, one each for development, testing, and production. Cloud zones are the resources onto which the project will deploy the machines, networks, and storage to support the WordPress site. Prerequisites Add cloud accounts. See WordPress use case: add cloud accounts. Procedure - Go to . - Click New Cloud Zone, and enter values for the development environment. Remember that all values are only use case samples. Your zone specifics will vary. - Click Compute, and verify that the zones you expect are there. - Click Create. - Repeat the process twice, with values for the test and production environments. What to do next Account for different size machine deployments by adding flavor mappings. See WordPress use case: add flavor mappings.
https://docs.vmware.com/en/vRealize-Automation/8.1/Using-and-Managing-Cloud-Assembly/GUID-9DB7610D-9179-419B-9605-539F625A81D1.html
2021-11-27T15:25:36
CC-MAIN-2021-49
1637964358189.36
[]
docs.vmware.com
Inserting variables into pages with glue¶ You often wish to run analyses in one notebook and insert them into your documents text elsewhere. For example, if you’d like to include a figure, or if you want to cite a statistic that you have run. The glue submodule allows you to add a key to variables in a notebook, then display those variables in your book by referencing the key. This page describes how to add keys to variables in notebooks, and how to insert them into your book’s content in a variety of ways.1 cell). Choose a key that you will remember, as you will use it later. The following code glues a variable inside the notebook: from myst_nb import glue a = "my variable!" glue("my_variable", a) 'my variable!' You can then insert it into your text like so: 'my variable!'. That was accomplished with the following code: {glue:}`my_variable`.) # Calcualte the 95% confidence interval for the mean clo, chi = np.percentile(means, [2.5, 97.5]) # Store the values in our notebook glue("boot_mean", means.mean()) glue("boot_clo", clo) glue("boot_chi", chi) 2.999170526384387 2.98664162917544 3.0114864783550366 parts of cells). Pasting glued variables into your page.999170526384387, and figure; . Tip We recommend using wider, shorter figures when plotting in-line, with a ratio around 6x2. For example, here’s 2.999170526384387 (95% confidence interval 2.98664162917544/3.0114864783550366): Which we reference as Equation (1). Note glue:math only works with glued variables that contain a text/latex output. Advanced glue usecases¶ Here are a few more specific and advanced uses of the glue submodule. Pasting from pages you don’t include in the documentation¶ Sometimes you’d like to use variables from notebooks that are not meant to be shown to users. In this case, you should bundle the notebook with the rest of your content pages, but include orphan: in the metadata of the notebook. For example, the following text: {glue:}`orphaned_var` was created in {ref}`orphaned-nb`. Results in: 'My orphaned variable!' was created in An orphaned notebook Pasting into tables¶ In addition to pasting blocks of outputs, or in-line with text, you can also paste directly into tables. This allows you to compose complex collections of structured data using outputs that were generated in other notebooks. For example the following: - 1 This notebook can be downloaded as glue.ipynband glue.md
https://myst-nb.readthedocs.io/en/v0.12.3/use/glue.html
2021-11-27T15:15:22
CC-MAIN-2021-49
1637964358189.36
[array(['../_images/glue_10_2.png', '../_images/glue_10_2.png'], dtype=object) array(['../_images/glue_10_3.png', '../_images/glue_10_3.png'], dtype=object) array(['../_images/glue_10_0.png', '../_images/glue_10_0.png'], dtype=object) ]
myst-nb.readthedocs.io
- application - Handle to the UIApplication. - userInfo - Documentation for this section has not yet been entered. - completionHandler - Callback to invoke to notify the operating system of the result of the background fetch operation. This method is part of iOS 7.0 new remote notification support. This method is invoked if your Entitlements list the "remote-notification" background operation is set, and you receive a remote notification. Upon completion, you must notify the operating system of the result of the method by invoking the provided callback. Important: failure to call the provided callback method with the result code before this method completes will cause your application to be terminated.
http://docs.go-mono.com/monodoc.ashx?link=M%3AUIKit.UIApplicationDelegate.DidReceiveRemoteNotification
2021-11-27T15:23:50
CC-MAIN-2021-49
1637964358189.36
[]
docs.go-mono.com
About Background Dr. Megan Jack is a board-certified plastic surgeon in Knoxville, TN. She originally hails from the Midwest, being born and raised in Kansas City where she attended medical school at the University of Kansas. She then went on to a General Surgery residency at the University of North Carolina followed by a Plastic Surgery fellowship at the esteemed Cleveland Clinic Florida. She spent her first 8 years in private practice in south Florida and now is serving East Tennessee with her cosmetic and reconstructive skills. Philosophy Dr. Jack has a relaxed, friendly personality; but don't downplay her great eye for aesthetics. She has a great rapport with patients & one of her main priorities is making patients feel comfortable by addressing all of their concerns & leaving no question unanswered. Dr. Jack is an extremely detail-oriented person who is never happy with less than optimal results & her philosophy is that beauty is a package deal, starting from the inside and finishing on the out. She believes that plastic surgery helps to restore and/or enhance a person's natural beauty and wants her patients to know that natural looking results are possible! As a working mom of 2 young girls, Dr. Jack understands what it feels like to have your body change and not feel like you have the time to care for yourself---which is why she wants to help you prioritize self-care, restore confidence in your body, and help you feel confident and comfortable in your own body. Her goal in summary: help you STAY BEAUTIFUL! She specializes in breast augmentation, breast lift, breast reduction, Mommy Makeover, liposuction and liposculpture, tummy tuck, skin cancer reconstruction, and is an expert injector of Botox and fillers. She is the only board-certified plastic surgeon in private practice in Knoxville, TN. Board certification: American Board of Plastic Surgery Hospital affiliation: Park West Medical Center
https://app.uber-docs.com/Specialists/SpecialistProfile/Megan-Jack-MD/East-Tennessee-Plastic-Surgery
2021-11-27T14:22:54
CC-MAIN-2021-49
1637964358189.36
[]
app.uber-docs.com
Moving an Item FlexNet Manager Suite 2020 R2 (On-Premises) You can select one item at a time to move. If the item has any child items attached to it in the Inventory details tree, these automatically move with their parent. To move an item: - If necessary, navigate to the item in the tree, expanding higher nodes as required to expose the item you want to move. - Do one of the following: The Move simulation item dialog appears, showing the list of simulation items that are suitable to parent the object you are moving. - In the Actions column at the right of that row, click the move button (up/down icon) - At the left end of the row, select the check box, and click Move in the tool bar above the simulation tree. (You can select only one row for a move, although all its descendent items will move with it.) - Select the new parent for the item you are moving.Tip: If the list is long, you may use the search field in this dialog to find a new parent item. The search covers all fields, including those available in the Field Chooser for this dialog (available through Advanced). If you choose to display an extra field, be sure to click Refresh to gather the additional data values. - Click OK.
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/tasks/Sim-MovingItem.html
2021-11-27T15:22:32
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
scriptharness.log module¶ The goal of full logging is to be able to debug problems purely through the log. - class scriptharness.log. LogMethod(func=None, **kwargs)¶ Bases: object Wrapper decorator object for logging and error detection. This is here as a shortcut to wrap functions with basic logging. default_config¶ dict contains the config defaults that can be overridden via __init__ kwargs. Changing default_config directly may carry over to other decorated LogMethod functions! __call__(func, *args, **kwargs)¶ Wrap the function call as a decorator. When there are decorator arguments, __call__ is only called once, at decorator time. args and kwargs only show up when func is called, so we need to create and return a wrapping function. default_config= {u'post_success_msg': u'%(func_name)s completed.', u'error_level': 40, u'post_failure_msg': u'%(func_name)s failed.', u'logger_name': u'scriptharness.{func_name}', u'level': 20, u'pre_msg': u'%(func_name)s arguments were: %(args)s %(kwargs)s', u'exception': None, u'detect_error_cb': None} post_func()¶ Log the success message until we get an error detection callback. This method is split out for easier subclassing. pre_func()¶ Log the function call before proceeding. This method is split out for easier subclassing. set_repl_dict()¶ Create a replacement dictionary to format strings. The log messages in pre_func() and post_func() require some additional info. Specify that info in the replacement dictionary. Currently, set the following: func_name: self.func.__name__ *args: the args passed to self.func() **kwargs: the kwargs passed to self.func() After running self.func, we’ll also set return_value. - class scriptharness.log. OutputBuffer(logger, pre_context_lines, post_context_lines)¶ Bases: object Buffer output for context lines: essentially, an error_check can set the level of X lines in the past or Y lines in the future. If multiple error_checks set the level for a line, currently the higher level wins. For instance, if a make: *** [all] Error 2sets the level to logging.ERROR for 10 pre_context_lines, we’ll need to buffer at least 10 lines in case we hit that error. If a second error_check sets the level to logging.WARNING 5 lines above the make: *** [all] Error 2, the ERROR wins out, and that line is still marked as an ERROR. This restricts the buffer size to pre_context_lines. In years past I’ve also ordered Visual Studio output by thread, and set the error all the way up until we match some other pattern, so the buffer had to grow to an arbitrary size. Those could be represented by separate classes/subclasses if needed. pop_buffer(num=1)¶ Pop num lines from the front of the buffer and log them at the level set for each line. - class scriptharness.log. OutputParser(error_list, logger=None, **kwargs)¶ Bases: object Helper object to parse command output. add_buffer(level, messages, error_check=None)¶ Add the line to self.context_buffer if it exists, otherwise log it. - class scriptharness.log. UnicodeFormatter(fmt=None, datefmt=None)¶ Bases: logging.Formatter Subclass logging.Formatter to handle unicode strings in py2. encoding= u'utf-8' scriptharness.log. get_console_handler(formatter=None, logger=None, level=20)¶ Create a stream handler to add to a logger. scriptharness.log. get_file_handler(path, level=20, formatter=None, logger=None, mode=u'w')¶ Create a file handler to add to a logger. scriptharness.log. get_formatter(fmt=u'%(asctime)s %(levelname)8s - %(message)s', datefmt=u'%H:%M:%S')¶ Create a unicode-friendly formatter to add to logging handlers. scriptharness.log. prepare_simple_logging(path, mode=u'w', logger_name=u'', level=20, formatter=None)¶ Create a unicode-friendly logger. By default it’ll create the root logger with a console handler; if passed a path it’ll also create a file handler. Both handlers will have a unicode-friendly formatter. This function is intended to be called a single time. If called a second time, beware creating multiple console handlers or multiple file handlers writing to the same file.
https://python-scriptharness.readthedocs.io/en/stable/scriptharness.log/
2021-11-27T14:24:03
CC-MAIN-2021-49
1637964358189.36
[]
python-scriptharness.readthedocs.io
Alpha Diversity What is alpha diversity? Alpha diversity is used to measure the diversity within a sample and answers the question "how many?". It allows you to look at number of different taxa within each sample separately. If a sample has high alpha diversity it contains many organisms. There are several methods that can be used to look at and understand alpha diversity. When talking about alpha diversity, we are looking at two things: Species richness - a count of the number of different species present in a sample. It does not take into account the abundance of the species or their relative distributions. Species evenness - a measure of relative abundance of different species that make up the richness. Measures for evenness and richness: The input metric for Alpha Diversity is Abundance Score. Abundance score is a normalized abundance metric that reflects the underlying microbiome composition of the community. The aggregation level of the input data for Comparative Analysis has been set to species level. CHAO1 index CHAO1 index - appropriate for abundance data - assumes that the number of organisms identified for a taxa has a poisson distribution and corrects for variance. It is useful for data sets skewed toward low-abundance calls, as if often the case with microbes. Here you can see a dot plot using the CHAO1 index: Simpson The Simpson diversity index is used to calculate a measure of diversity taking into account the number of taxa as well as the abundance. The simpson index gives more weight to common or dominant species which means a few rare species with only a few representatives will not affect the diversity of the sample. Here you see the same dataset as above but with the Simpson index: Shannon The Shannon index summarizes the diversity in the population while assuming all species are represented in a sample and they are randomly sampled. The Shannon index increases as both the richness and evenness of the community increase. Here is the same dot plot as in the previous two examples but calculated using the Shannon index: Box plots Additionally, if you scroll down below your dot plot, you will see a box plot for each sample cohort selected using labels when creating comparative analysis: Wilcoxon rank sum test statistics How does this test work and what do the results mean? Above the alpha diversity chart the results of a Wilcoxon rank sum test can be explored in a table. This nonparametric statistical test can be used to investigate whether two independent cohorts consist of samples that were selected from populations having the same alpha diversity distribution. The null hypothesis thereby is that the probability that a randomly selected value from one cohort is less than a randomly selected value from a second cohort is equal to the probability of being greater. P-values below e.g. 0.05 suggest that the null hypothesis can be rejected, confirming that the samples of two cohorts are selected from populations with different alpha diversity distributions. Test results are shown in table listing the test statistic and p-value for the different possible cohort combinations. A negative (positive) test statistic for a cohort pair Cohort 1 ↔︎ Cohort 2 thereby means that the median alpha diversity of Cohort 1 is lower (higher) than the median alpha diversity for Cohort Viewing the test results Above and to the right of the alpha diversity charts you can find a Result Switcher and a Cohort Menu. The Result Switcher allows you to hide the results (using the NONE setting, which is selected by default). Clicking on SIGNIFICANT will limit the results to only those cohort combinations for which alpha diversity differs with statistical significance, i.e. for which the p-value is equal or lower than 0.05. The ALL setting displays test statistics and p-values for all. It is possible to export the test results to TSV. Updated 4 months ago
https://docs.cosmosid.com/docs/alpha-diversity
2021-11-27T15:18:10
CC-MAIN-2021-49
1637964358189.36
[array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/jkuP54OO/90e82b3d-7b4c-4727-917e-ca95fea0d6c4.jpg?v=91bb22b3c8631f8c6a6d72a875d6b3f6', None], dtype=object) array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/2Nuq4lbO/82e9ae7b-567d-4efa-a065-b66fb9c824d9.jpg?v=ffe58119788e239609e1b474ddab66f6', None], dtype=object) array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/JruD2xem/bc5e1595-1d5f-469d-ba59-f70bc916f0d3.jpg?v=629f29044822a9e3d5cf88bb64844d6d', None], dtype=object) array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/jkuP546b/8c9a98f6-887c-4af0-a660-922d575f6b82.jpg?v=106cab22e7f79156bcb5dedeffc5a54c', None], dtype=object) ]
docs.cosmosid.com
Date: Mon, 29 Jan 2001 18:07:42 -0500 From: "Peter Brezny" <[email protected]> To: <[email protected]> Subject: piping dump output to mail user. Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I was hoping to be able to mail what dump would normally output to the screen with a command like this in crontab dump (variables) | mail [email protected] Although the dumps run, I don't get anything in the mail. Ideas? Thanks in advance. Peter Brezny SysAdmin Services Inc. To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=681207+0+/usr/local/www/mailindex/archive/2001/freebsd-questions/20010204.freebsd-questions
2021-11-27T15:26:09
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
the → menu item transaction. If you haven't entered the payee or payer, KMyMoney will offer you the opportunity to do this in the middle of entering the schedule. To add other new transactions go to Ledgers; you can add new payees and categories in the middle of a transaction or by going to Payees or Categories before entering the transaction. You will probably find that the default Categories do not exactly match your needs; you can easily delete ones you know you are never going to need and add others that you need. But when you are entering a transaction, you only have to type a few letters of a category and KMyMoney will offer you a drop down list of the matching categories from which to choose. You can add different accounts managed by different institutions; the preferred one will show when you open KMyMoney but you can quickly switch to any of the others. When you make a payment, KMyMoney will work out what the next check number should be; delete this if you are not making a check payment or edit it if the first check you enter is not check number 1. Alternatively, it is possible to switch off auto-increment of check numbers. Every so often you may get statements of your account from the institutions you use; you can reconcile your KMyMoney accounts against these statements so that you have an accurate statement of the current state of your finances. If you go to Reports, you will find several default reports; to customize these, open one similar to the sort you prefer and then select 'New' (not 'Copy'); you can then customize this to your needs and mark it as a preferred report if you wish. Though KMyMoney is not intended for use in a business context, if you are running a business on your own and so do not need payroll functions, you will probably find that KMyMoney is sufficiently customizable to meet your needs particularly as it comes with budgeting and forecasting features and you can export your customized reports via CSV into other applications.
https://docs.kde.org/trunk5/en/kmymoney/kmymoney/makingmostof.usefultips.html
2021-11-27T14:10:36
CC-MAIN-2021-49
1637964358189.36
[array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
cmd Starts To use multiple commands for <string>, separate them by the command separator && and enclose them in quotation marks. For example: "<command1>&&<command2>&&<command3>" If you specify /c or /k, cmd processes, the remainder of string, and the quotation marks are preserved only if all of the following conditions are met: You don't also use /s. You use exactly one set of quotation marks. You don't use any special characters within the quotation marks (for example: & < > ( ) @ ^ | ). You use one or more white-space characters within the quotation marks. The string within quotation marks is the name of an executable file. If the previous conditions aren't met, string is processed by examining the first character to verify whether it is an opening quotation mark. If the first character is an opening quotation mark, it is stripped along with the closing quotation mark. Any text following the closing quotation marks is preserved. If you don're executed before all other variables. Caution Incorrectly editing the registry may severely damage your system. Before making changes to the registry, you should back up any valued data on the computer. You can disable command extensions) If you enable delayed environment variable expansion, you can use the exclamation point character to substitute the value of an environment variable at run time.. Caution Incorrectly editing the registry may severely damage your system. Before making changes to the registry, you should back up any valued data on the computer.. Pressing CTRL+D or CTRL+F, processes the file and directory name completion. These key combination functions append a wildcard character to string (if one is not present), builds a list of paths that match, and then displays the first matching path. If none of the paths match, the file and directory name completion function beeps and does not change the display. To move through the list of matching paths, press CTRL+D or CTRL+F repeatedly. To move through the list backwards, press the SHIFT key and CTRL+D or CTRL+F simultaneously. To discard the saved list of matching paths and generate a new list, edit string and press CTRL+D or CTRL+F. If you switch between CTRL+D and CTRL+F, the saved list of matching paths is discarded and a new list is generated. The only difference between the key combinations CTRL+D and CTRL+F is that CTRL+D only matches directory names and CTRL+F matches both file and directory names. If you use file and directory name completion on any of the built-in directory commands (that is, CD, MD, or RD), directory completion is assumed. File and directory name completion correctly processes file names that contain white space or special characters if you place quotation marks around the matching path. You must use quotation marks around the following special characters: & < > [ ] | { } ^ = ; ! ' + , ` ~ [white space]. If the information that you supply contains spaces, you must use quotation marks around the text (for example, "Computer Name"). If you process file and directory name completion from within string, any part of the path to the right of the cursor is discarded (at the point in string where the completion was processed).
https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/cmd
2021-11-27T15:33:07
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Smart Plugins¶ Pyrogram embeds a smart, lightweight yet powerful plugin system that is meant to further simplify the organization of large projects and to provide a way for creating pluggable (modular) components that can be easily shared across different Pyrogram applications with minimal boilerplate code. Tip Smart Plugins are completely optional and disabled by default. Contents Introduction¶ Prior to the Smart Plugin system, pluggable handlers were already possible. For example, if you wanted to modularize your applications, you had to put your function definitions in separate files and register them inside your main script after importing your modules, like this: Note This is an example application that replies in private chats with two messages: one containing the same text message you sent and the other containing the reversed text message. Example: “Pyrogram” replies with “Pyrogram” and “margoryP” myproject/ config.ini handlers.py main.py handlers.py def echo(client, message): message.reply(message.text) def echo_reversed(client, message): message.reply(message.text[::-1]) main.py from pyrogram import Client, filters from pyrogram.handlers import MessageHandler from handlers import echo, echo_reversed app = Client("my_account") app.add_handler( MessageHandler( echo, filters.text & filters.private)) app.add_handler( MessageHandler( echo_reversed, filters.text & filters.private), group=1) app.run() This is already nice and doesn’t add too much boilerplate code, but things can get boring still; you have to manually import, manually add_handler() and manually instantiate each MessageHandler object because you can’t use those cool decorators for your functions. So, what if you could? Smart Plugins solve this issue by taking care of handlers registration automatically. Using Smart Plugins¶ Setting up your Pyrogram project to accommodate Smart Plugins is pretty straightforward: Create a new folder to store all the plugins (e.g.: “plugins”, “handlers”, …). Put your python files full of plugins inside. Organize them as you wish. Enable plugins in your Client or via the config.ini file. Note This is the same example application as shown above, written using the Smart Plugin system. myproject/ plugins/ handlers.py config.ini main.py plugins/handlers.py from pyrogram import Client, filters @Client.on_message(filters.text & filters.private) def echo(client, message): message.reply(message.text) @Client.on_message(filters.text & filters.private, group=1) def echo_reversed(client, message): message.reply(message.text[::-1]) config.ini [plugins] root = plugins main.py from pyrogram import Client Client("my_account").run() Alternatively, without using the config.ini file: from pyrogram import Client plugins = dict(root="plugins") Client("my_account", plugins=plugins).run() The first important thing to note is the new plugins folder. You can put any python file in any subfolder and each file can contain any decorated function (handlers) with one limitation: within a single module (file) you must use different names for each decorated function. The second thing is telling Pyrogram where to look for your plugins: you can either use the config.ini file or the Client parameter “plugins”; the root value must match the name of your plugins root folder. Your Pyrogram Client instance will automatically scan the folder upon starting to search for valid handlers and register them for you. Then you’ll notice you can now use decorators. That’s right, you can apply the usual decorators to your callback functions in a static way, i.e. without having the Client instance around: simply use @Client (Client class) instead of the usual @app (Client instance) and things will work just the same. Specifying the Plugins to include¶ By default, if you don’t explicitly supply a list of plugins, every valid one found inside your plugins root folder will be included by following the alphabetical order of the directory structure (files and subfolders); the single handlers found inside each module will be, instead, loaded in the order they are defined, from top to bottom. Note Remember: there can be at most one handler, within a group, dealing with a specific update. Plugins with overlapping filters included a second time will not work. Learn more at More on Updates. This default loading behaviour is usually enough, but sometimes you want to have more control on what to include (or exclude) and in which exact order to load plugins. The way to do this is to make use of include and exclude directives, either in the config.ini file or in the dictionary passed as Client argument. Here’s how they work: If both includeand excludeare omitted, all plugins are loaded as described above. If includeis given, only the specified plugins will be loaded, in the order they are passed. If excludeis given, the plugins specified here will be unloaded. The include and exclude value is a list of strings. Each string containing the path of the module relative to the plugins root folder, in Python notation (dots instead of slashes). E.g.: subfolder.modulerefers to plugins/subfolder/module.py, with root="plugins". You can also choose the order in which the single handlers inside a module are loaded, thus overriding the default top-to-bottom loading policy. You can do this by appending the name of the functions to the module path, each one separated by a blank space. E.g.: subfolder.module fn2 fn1 fn3will load fn2, fn1 and fn3 from subfolder.module, in this order. Examples¶ Given this plugins folder structure with three modules, each containing their own handlers (fn1, fn2, etc…), which are also organized in subfolders: myproject/ plugins/ subfolder1/ plugins1.py - fn1 - fn2 - fn3 subfolder2/ plugins2.py ... plugins0.py ... ... Load every handler from every module, namely plugins0.py, plugins1.py and plugins2.py in alphabetical order (files) and definition order (handlers inside files): Using config.ini file: [plugins] root = plugins Using Client’s parameter: plugins = dict(root="plugins") Client("my_account", plugins=plugins).run() Load only handlers defined inside plugins2.py and plugins0.py, in this order: Using config.ini file: [plugins] root = plugins include = subfolder2.plugins2 plugins0 Using Client’s parameter: plugins = dict( root="plugins", include=[ "subfolder2.plugins2", "plugins0" ] ) Client("my_account", plugins=plugins).run() Load everything except the handlers inside plugins2.py: Using config.ini file: [plugins] root = plugins exclude = subfolder2.plugins2 Using Client’s parameter: plugins = dict( root="plugins", exclude=["subfolder2.plugins2"] ) Client("my_account", plugins=plugins).run() Load only fn3, fn1 and fn2 (in this order) from plugins1.py: Using config.ini file: [plugins] root = plugins include = subfolder1.plugins1 fn3 fn1 fn2 Using Client’s parameter: plugins = dict( root="plugins", include=["subfolder1.plugins1 fn3 fn1 fn2"] ) Client("my_account", plugins=plugins).run() Load/Unload Plugins at Runtime¶ In the previous section we’ve explained how to specify which plugins to load and which to ignore before your Client starts. Here we’ll show, instead, how to unload and load again a previously registered plugin at runtime. Each function decorated with the usual on_message decorator (or any other decorator that deals with Telegram updates) will be modified in such a way that a special handler attribute pointing to a tuple of (handler: Handler, group: int) is attached to the function object itself. plugins/handlers.py @Client.on_message(filters.text & filters.private) def echo(client, message): message.reply(message.text) print(echo) print(echo.handler) Printing echowill show something like <function echo at 0x10e3b6598>. Printing echo.handlerwill reveal the handler, that is, a tuple containing the actual handler and the group it was registered on (<MessageHandler object at 0x10e3abc50>, 0). Unloading¶ In order to unload a plugin, all you need to do is obtain a reference to it by importing the relevant module and call remove_handler() Client’s method with your function’s handler special attribute preceded by the star * operator as argument. Example: main.py from plugins.handlers import echo ... app.remove_handler(*echo.handler) The star * operator is used to unpack the tuple into positional arguments so that remove_handler will receive exactly what is needed. The same could have been achieved with: handler, group = echo.handler app.remove_handler(handler, group) Similarly to the unloading process, in order to load again a previously unloaded plugin you do the same, but this time using add_handler() instead. Example: main.py from plugins.handlers import echo ... app.add_handler(*echo.handler)
https://docs.pyrogram.org/topics/smart-plugins
2021-11-27T15:18:56
CC-MAIN-2021-49
1637964358189.36
[]
docs.pyrogram.org
] Deletes specified tags from a. Currently, the supported resource is an Amazon ECR Public repository. --tag-keys (list) The keys of the tags to be removed. .
https://docs.aws.amazon.com/cli/latest/reference/ecr-public/untag-resource.html
2021-11-27T16:14:32
CC-MAIN-2021-49
1637964358189.36
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Get-EC2VolumeStatus-VolumeId <String[]>-Filter <Filter[]>-MaxResult <Int32>-NextToken <String>-Select <String>-PassThru <SwitchParameter>-NoAutoIteration <SwitchParameter> DescribeVolumeStatusoperation provides the following information about the specified volumes: Status: Reflects the current status of the volume. The possible values are ok, impaired, warning, or insufficient-data. If all checks pass, the overall status of the volume is ok. If the check fails, the overall status is impaired. If the status is insufficient-data, then the checks might still be taking place on your volume at the time. We recommend that you retry the request. For more information about volume status, see Monitor the status of your volumes in the Amazon Elastic Compute Cloud User Guide. Events: Reflect the cause of a volume status and might require you to take action. For example, if your volume returns an impairedstatus, then the volume event might be potential-data-inconsistency. This means that your volume has been affected by an issue with the underlying host, has all I/O operations disabled, and might have inconsistent data. Actions: Reflect the actions you might have to take in response to an event. For example, if the status of the volume is impairedand the volume event shows potential-data-inconsistency, then the action shows enable-volume-io. This means that you may want to enable the I/O operations for the volume by calling the EnableVolumeIO action and then check the volume for data consistency. Volume status is based on the volume status checks, and does not reflect the volume state. Therefore, volume status does not indicate volumes in the errorstate (for example, when a volume is incapable of accepting I/O.)). DescribeVolumeStatusin paginated output. When this parameter is used, the request only returns MaxResultsresults in a single page along with a NextTokenresponse element. The remaining results of the initial request can be seen by sending another request with the returned NextTokenvalue. This value can be between 5 and 1,000; if MaxResultsis given a value larger than 1,000, only 1,000 results are returned. If this parameter is not used, then DescribeVolumeStatusreturns all results. You cannot specify this parameter and the volume IDs parameter in the same request. NextTokenvalue to include in a future DescribeVolumeStatusrequest. When the results of the request exceed MaxResults, this value can be used to retrieve the next page of results. This value is nullwhen there are no more results to return. Get-EC2VolumeStatus -VolumeId vol-12345678 Actions : {} AvailabilityZone : us-west-2a Events : {} VolumeId : vol-12345678 VolumeStatus : Amazon.EC2.Model.VolumeStatusInfo (Get-EC2VolumeStatus -VolumeId vol-12345678).VolumeStatus Details Status ------- ------ {io-enabled, io-performance} ok (Get-EC2VolumeStatus -VolumeId vol-12345678).VolumeStatus.Details Name Status ---- ------ io-enabled passed io-performance not-applicableThis example describes the status of the specified volume. AWS Tools for PowerShell: 2.x.y.z
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2VolumeStatus.html
2021-11-27T14:24:18
CC-MAIN-2021-49
1637964358189.36
[]
docs.aws.amazon.com
Date: Tue, 2 Feb 2016 05:40:56 +0000 From: Team BankBazaar <[email protected]> To: [email protected] Subject: Getting A Personal Loan Has Never Been Easier Message-ID: <00000152a07f7e57-5f9560f4-edd9-490f-84bc-41d1208a66d9-000000@email.amazonses.com> Next in thread | Raw E-Mail | Index | Archive | Help Thinking about a Personal Loan? Here are the questions to ask. Why a Personal Loan over others? Simple paperwork, fast approval and absence of collateral. Need we say more? Does a Salary Account help? Yes, you get a preferential rate of interest from the bank. Very nice, no? The amount I can borrow, please? This would depend on your current income and your ability to repay. Cool? How about repayment? You need to pay back in equal monthly installments (EMIs) which is a great option since it breaks down the entire amount into smaller chunks. Easy, right? It’s pretty clear that Personal Loans are designed to help you in your times of need. Ready to apply? Get A Personal Loan < &client=13035> Cheers, Team BankBazaar Opt-out < &client=13035> Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=124917+0+/usr/local/www/mailindex/archive/2016/freebsd-ports/20160207.freebsd-ports
2021-11-27T15:44:05
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Reset Toolbars Versions: 2008,2010 Published: 7/27/2010 Code: vstipEnv0029 You can reset any toolbar to its default settings. Just click on the drop-down to the right of any toolbar then click on "Add or Remove Buttons": Click on "Reset Toolbar": You will get a dialog like this: Click "Yes" and it will remove any custom buttons and reset the toolbar to its default settings.
https://docs.microsoft.com/en-us/archive/blogs/zainnab/reset-toolbars
2021-11-27T14:06:58
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
In this topic, you can find information on viewing MnR gateway traffic flow collector reports. Prerequisites To start the data flow, you must start and deploy the MnR traffic flow collector, see the Collector in VMware Telco Cloud Operations Configuration Guide for more information. Procedure - Go to http(s):// VMware Telco Cloud Operations MnR Gateway Traffic Flow Report, click TCOps MnR Traffic Flow.
https://docs.vmware.com/en/VMware-Telco-Cloud-Operations/1.0/vmware-tco-10-user-guide/GUID-D4E3A174-F894-4FD7-B288-9FBC0F86063F.html
2021-11-27T15:44:58
CC-MAIN-2021-49
1637964358189.36
[]
docs.vmware.com
Date: Fri, 15 Jan 1999 14:51:46 +0100 From: "Francois E Jaccard" <[email protected]> To: <[email protected]> Message-ID: <000001be408e$3169bee0$0100a8c0@sicel-3-213> Next in thread | Raw E-Mail | Index | Archive | Help Hi, I would like to install FreeBSD 3.0-RELEASE (CD from cdrom.com) on a 6.4Gb Fuji UDMA HD but the problem is that my PC does not support UDMA. I get begin the install but I get "Interrupt timeout" after a time (at 91% into installing /bin) I also have a Promise ULTRA/33 controller that IS detected at boot but if I put the HD on it, it is not detected. The PC is a PPro 200 on an Intel Venus Mobo, 64Mg Ram.-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1839760+0+/usr/local/www/mailindex/archive/1999/freebsd-questions/19990110.freebsd-questions
2021-11-27T15:02:03
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Adding or Updating Thresholds You can update the values for a threshold in Advisors. You can enter values for Lower-Bound Critical and Lower-Bound Warning, or Upper-Bound Warning and Upper-Bound Critical, or all four values. Depending on the metric, the value may be acceptable above or below a certain value. If for example, the threshold is defined with only Upper-Bound Warning of 50 and Upper-Bound Critical of 75, then a value between 50 and 75 triggers a warning. If the value is above 75, a critical violation is triggered. If the threshold is defined with a only Lower-Bound Warning of 75 and Lower-Bound Critical of 70, then a value between 70 and 75 triggers a warning. If the value is below 70, a critical violation is triggered. For a case in which all four values are set, the threshold values are defined to trigger if the value is below or above defined values. For example, values below 10 or above 90 might trigger a critical violation, values between 80 and 90 or between 10 and 20 trigger a warning violation, and values between 20 and 80 are acceptable. Procedure: Update application or contact group thresholds Steps - For CCAdv, click the Application Thresholds tab. For WA, click the Contact Group Thresholds tab. - Select an application group. - In the Thresholds panel, select a metric to work with. If you do not see the metric you want, then its Threshold Applicable setting is not set to Yes. To set it, go to the go to the Report Metrics page and change it there. - Type the values for the upper-bound and/or lower-bound limits for the selected metric. Your values are restricted by those in the Min and Max columns of the metric. To set new Min and Max values, go to the Report Metrics page and change them there. - To save the changes, click Save. A confirmation message displays. The values display on the Thresholds page. - Add any exceptions required. See Adding Threshold Exceptions.
https://docs.genesys.com/Documentation/PMA/latest/CCAWAUser/AddingorUpdatingThresholds
2021-11-27T15:09:37
CC-MAIN-2021-49
1637964358189.36
[]
docs.genesys.com
Timeout manager¶ Timeout manager allows application to call specific function at desired time. It is used in middleware (and can be used by application too) to poll active connections. Note Callback function is called from processing thread. It is not allowed to call any blocking API function from it. When application registers timeout, it needs to set timeout, callback function and optional user argument. When timeout elapses, ESP middleware will call timeout callback. This feature can be considered as single-shot software timer. - group LWESP_TIMEOUT Timeout manager. Typedefs Functions - lwespr_t lwesp_timeout_add(uint32_t time, lwesp_timeout_fn fn, void *arg)¶ Add new timeout to processing list. - lwespr_t lwesp_timeout_remove(lwesp_timeout_fn fn)¶ Remove callback from timeout list.
https://docs.majerle.eu/projects/lwesp/en/latest/api-reference/lwesp/timeout.html
2021-11-27T13:48:57
CC-MAIN-2021-49
1637964358189.36
[]
docs.majerle.eu
Deploying to Fleek #Introduction In this guide, we'll walk you how to deploy your Polywrapper to Fleek so that other dapps could integrate it into their dapps! #Setting up For this guide, we'll be using the Polywrap Demos repo. Clone the project onto your own machine: Then, we will check out the demo branch with the meta files already set up: Now, we can build the sample Polywrapper with the following commands: Your build folder should be generated now. Copy and paste the ./web3api.meta.yaml and ./meta files into the build folder. #Uploading the build folder to Fleek On the left hand menu of your Fleek account page, click on the "Storage" link. Then, click "Create Folder" and name it anything that you'd like. After that, click "Upload" to begin uploading contents of your build folder onto Fleek. #Verifying the package on IPFS For the last step, simply click the "Verify on IPFS" button. This will provide you with the IPFS hash! For an example of what you should see, visit this IPFS link. Now that you have the IPFS hash, you can use it as a value in the URI property of your Polywrap queries to access the mutation and query functions on this Polywrapper. You can also register an ENS domain and have it resolve to this IPFS content.
https://docs.polywrap.io/guides/deploy-fleek/
2021-11-27T13:36:46
CC-MAIN-2021-49
1637964358189.36
[]
docs.polywrap.io
ePayePay Configure ePayConfigure ePay Login to the ePay administration. Change the language to English by clicking on the flag at the bottom left side. Click Settings -> Payment system in the menu to the left. Enter the domain of your e-commerce solution (automatically includes all subdomains). Then select that ePay should work with unique order ids, and enter a security key of your choice. Next click on the API / Webservices -> Access menu item. Enter a random webservice password and at the same time add the external IP address of your server running the webshop. Configure Tea CommerceConfigure Tea Commerce Create a payment method and select ePay as the payment provider. Now configure the settings. ePay supports a wide range of different settings which you can read more about in their documentation. Configure WebsiteConfigure Website To use the ePay payment provider, it is required by ePay, that you add the following script reference in your <head> section of your website. The script handles the opening of the payment window on the page itself or in a popup window. <script type="text/javascript" src=""></script>
https://docs.teacommerce.net/3.3.1/payment-providers/epay/
2021-11-27T14:21:14
CC-MAIN-2021-49
1637964358189.36
[array(['/img/a5669d2-epay-1.png', 'epay-1.png'], dtype=object) array(['/img/88d8631-epay-2.png', 'epay-2.png'], dtype=object) array(['/img/40133d0-epay-3.png', 'epay-3.png'], dtype=object)]
docs.teacommerce.net
Date: Sat, 27 Nov 2021 15:21:37 +0000 (GMT) Message-ID: <515493159.81585.1638026497793@9c5033e110b2> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_81584_1372644241.1638026497793" ------=_Part_81584_1372644241.1638026497793 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Contents:=20 Page options: Cancel job: Click this bu= tton to cancel your job while it is still in progress. NOTE: This option may not be available for all running = environments. Delete job: Delet= e the job and its results. Deleting a job cannot be undone. Do= wnload profile as JSON: If visual profiling was enabled for the jo= b, you can download a JSON representation of the profile to your desktop.= p> In the Overview tab, you can review the job status, its sources, and the= details of the job run. NOTE: If your job failed, you may be prompted with an e= rror message indicating a job ID that differs from the listed one. This job= ID refers to the sub-job that is part of the job listed in the Job summary= . Figure: Overview t= ab You can review a snapshot of the results of your job. Output Data: The output data section displays a preview of the generated output of yo= ur job. NOTE: This section is not displayed if the job fails.= p> You can also perform the following: View: If it is pr= esent, you can click the View link to view the job results in the datastore= where they were written. NOTE: The View link may not be available for all jobs.<= /p> Download : If it is = present, click the Download link to download the generated job result= s to your local desktop. Co= mpleted Stages: NOTE: If you chose to generate a profile of your= job results, the transformation and profiling tasks may be combined into a= single task, depending on your environment. If they are combined and profi= ling fails, any publishing tasks defined in the job are not launched. You m= ay be able to ad-hoc publish the generated results. See below. If present, you can click the S= how Warnings link to see any warnings pertaining to recipe errors,= including the relevant step number.To review the recipe and dependencies i= n your job, click View steps and dependencies. See th= e Dependencies tab below. Publish: You can also review the outputs generated as a result of your job. = To review and export any of the generated results, click View = all. See Outputs Destinations tab below. Job summary: Job ID: Unique identifier Transf= ormer page. If you have since modified the recipe, those changes are applie= d during the second run. See Trans= former Page. Execution summary: Manual - Job was= executed through the application interface. If the job has successfully completed, you can review the set of generat= ed outputs and export results. Figure: Output Des= tinations tab Actions: For each output, you can do the following: View details: Vie= w details about the generated output in the side bar. Download result: Download the generated output= to your local desktop. NOTE: Some file formats may not be downloadable to your= desktop. See below. Create imported dataset: Use the generated output to create a new imported dataset for use i= n your flows. See below. NOTE: This option is not available for all file formats= . Click one of the provided links to download the file through your browse= r to your local desktop. NOTE: If these options are not available, data download= may have been disabled by an administrator. Optionally, you can turn your generated results into new datasets f= or immediate use in Trifacta Wrangler. For the generated output, select <= strong>Create imported dataset from its context menu. NOTE: When you create a new dataset from your job resul= ts, the file or files that were written to the designated output location a= re used as the source. Depending on your backend datastore permissions are = configured, this location may not be accessible to other users. After the new output has been written, you can create new recipes from i= t. See Build Sequen= ce of Datasets. Review the visual profile of your generated results in the Profile tab.&= nbsp;Visual profiling can assist in identifying issues in your dataset that= require further attention, including outlier values. NOTE: This tab appears only if you selected to profile = results in your job definition. See Run Job Page. Figure: Profile ta= b In particular, you should pay attention to the mismatched values and mis= sing values counts, which identify the approximate percentage of affected v= alues across the entire dataset. For more information, see Overview of Visual Profiling.= NOTE: The computational cost of generating exact = visual profiling measurements on large datasets in interactive visual profi= les severely impacts performance. As a result, visual profiles across an en= tire dataset represent statistically significant approximations. NOTE: Trifacta Wrangler treats null values as missing values. = Imported values that are null are generated as missing values in job result= s (represented in the gray bar). See Manage Null Values. Tip: Mouse over the color bars to see counts of values = in the category. Tip: Use the horizontal scroll bar to see profiles of a= ll columns in wide datasets. In the lower section, you can explore details of the transformations of = individual columns. Use this area to explore mismatched or missing dat= a elements in individual columns. Depending on the data type of the column, varying information is display= ed. For more information, see Column Statistics Reference. Tip: You should review the type information for each co= lumn, which is indicated by the icon to the left of the column. In this tab, you can review a simplified representation of the flow from= which the job was executed. This flow view displays only the recipes and d= atasets that contributed to the generated results. Tip: To open the full flow, you can click its name in t= he upper-left corner. Figure: Dependency= graph tab Zoom menu: You can zoom the dependency graph canvas to display areas of interest in= the flow graph. The zoom control options are available at the top-right corner of t= he dependency graph canvas. The following are the available zoom options:= p> Tip: You can use the keyboard shortcuts listed i= n the zoom options menu to make quick adjustments to the zoom level.= Recipe actions: Limitations:
https://docs.trifacta.com/exportword?pageId=109906475
2021-11-27T15:21:38
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
Machine Learning for Neuroimagers¶ Overview¶ Machine Learning is a method of using data to train a classifier; this is called training data. The classifier is then provided with new data (also known as testing data), and it attempts to distinguish between different classes within the data based on the training data. The classifier’s performance is judged by its accuracy - how many of the testing data points it managed to correctly classify. After working through an example with AFNI to learn the basics, we will begin to use a Matlab package called The Decoding Toolbox. This will expand our options of different classifiers to use, such as searchlight algorithms and representational similarity analysis (RSA). To begin reading an overview of machine learning and how it is used with fMRI data, click the Next button; otherwise, select any of the chapters below to begin using either 3dsvm or The Decoding Toolbox. Note Thanks to Martin Hebart for helpful comments, especially about the statistical analysis of MVPA data. A useful series of PDFs and lectures can also be found here. Introduction to Machine Learning - Machine Learning: Introduction to Basic Terms and Concepts - Machine Learning Tutorial #1: Basic Example with Support Vector Machines - Machine Learning Tutorial #2: The Haxby Dataset - Machine Learning Tutorial #3: Preprocessing - Machine Learning Tutorial #4: Creating the Timing Files - Machine Learning Tutorial #5: MVPA Analysis with The Decoding Toolbox - Machine Learning Tutorial #6: Scripting - Machine Learning Tutorial #7: Group Analysis - Machine Learning Tutorial #8: Non-Parametric Analysis
https://andysbrainbook.readthedocs.io/en/latest/ML/ML_Overview.html
2021-11-27T15:18:44
CC-MAIN-2021-49
1637964358189.36
[array(['../_images/Haxby_Fig3A.png', '../_images/Haxby_Fig3A.png'], dtype=object) ]
andysbrainbook.readthedocs.io
As a personal finance manager, KMyMoney is not intended to scale to the needs of an average business. A small business might find sufficient functionality to use KMyMoney, but features such as accounts-receivable and accounts-payable are not directly supported. However, it would be possible to create client, supplier, document, or other features through plugins. KMyMoney is not simply a clone of the commercially available personal finance programs. Although many of the features present in KMyMoney can be found in other similar applications, KMyMoney strives to present an individual and unique view of your finances.
https://docs.kde.org/stable5/en/kmymoney/kmymoney/what-kmymoney-is-not.html
2021-11-27T14:27:41
CC-MAIN-2021-49
1637964358189.36
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
lightkurve.seismology.Seismology.estimate_mass¶ - Seismology.estimate_mass(teff=None, numax=None, deltanu=None)[source]¶ Calculates mass mass as M = Msol * (numax/numax_sol)^3(deltanu/deltanusol)^-4(Teff/Teffsol)^1.5 where M is the mass and Teff is the effective temperature, and the suffix ‘sol’ indicates a solar value. In this method we use the solar values for numax and deltanu as given in Huber et al. (2011) and for Teff as given in Prsa et al. (2016).. - deltanufloat The frequency spacing between two consecutive overtones of equal radial degree. If not given an astropy unit, assumed to be in units of microhertz. - tefffloat The effective temperature of the star. In units of Kelvin. - numax_errfloat Error on numax. Assumed to be same units as numax - deltanu_errfloat Error on deltanu. Assumed to be same units as deltanu - teff_errfloat Error on Teff. Assumed to be same units as Teff. - Returns - mass SeismologyQuantity Stellar mass estimate.
https://docs.lightkurve.org/reference/api/lightkurve.seismology.Seismology.estimate_mass.html
2021-11-27T14:09:19
CC-MAIN-2021-49
1637964358189.36
[]
docs.lightkurve.org
Instructions Operations Extensions. Get Async(IInstructionsOperations, String, String, String, CancellationToken) Method Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Get the instruction by name. These are custom billing instructions and are only applicable for certain customers. public static System.Threading.Tasks.Task<Microsoft.Azure.Management.Billing.Models.Instruction> GetAsync (this Microsoft.Azure.Management.Billing.IInstructionsOperations operations, string billingAccountName, string billingProfileName, string instructionName, System.Threading.CancellationToken cancellationToken = default); static member GetAsync : Microsoft.Azure.Management.Billing.IInstructionsOperations * string * string * string * System.Threading.CancellationToken -> System.Threading.Tasks.Task<Microsoft.Azure.Management.Billing.Models.Instruction> <Extension()> Public Function GetAsync (operations As IInstructionsOperations, billingAccountName As String, billingProfileName As String, instructionName As String, Optional cancellationToken As CancellationToken = Nothing) As Task(Of Instruction) Parameters - operations - IInstructionsOperations The operations group for this extension method. - cancellationToken - CancellationToken The cancellation token.
https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.billing.instructionsoperationsextensions.getasync?view=azure-dotnet
2021-11-27T14:33:13
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Files are imported based on the default file encoding for Trifacta Wrangler.As needed, you can override this default file encoding during the importing of individual datasets. NOTE: All output files are written in UTF-8 encoding. For a list of supported types for input, see Supported File Encoding Types. file to import in the Import Data page, click Edit Settings for the dataset card in the right panel. - From the drop-down, select the preferred encoding to apply to this specific file. - Continue the import process. This page has no comments.
https://docs.trifacta.com/pages/viewpage.action?pageId=135013666
2021-11-27T15:19:17
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
ResearchStorage Research Storage provides versatile containers for your data sets. The containers include dedicated space available on the Cheaha HPC platform. What do I get Each active user on Cheaha receives an annual 1-terabyte allocation of Research Storage at no direct cost to the user. This container is dedicated to your storage needs and is attached directly to the cluster. Additional storage capacity can purchased, see details below. How to get it Once you log in to Cheaha, you can access your default 1TB storage container here: cd /rstore/user/$USER/default You can check to see how much storage you have used in your container with the command: df -h /rstore/user/$USER/default How to use it For HPC work flows You can use this storage in any way that you find useful to assist you with your HPC work flows on Cheaha. You should still follow good high performance compute work flow recommendations and stage data sets $USER_SCRATCH during your job runs, especially if those data sets are heavily accessed or modified as part of your job operations. In general, a good use this storage for keeping larger data sets on the cluster longer than the lifetime of your active computations and for stuff that is too big to fit in your home directory. For Retaining Scratch Data One near term use for your storage container would be to safely preserve important data in your $USER_SCRATCH so that it is not destroyed by the upcoming scratch file system rebuild during the May 3-10 cluster service window. You can follow these steps to move a copy of important files from your personal scratch space to your default Research Storage container: - Remove any data from $USER_SCRATCH that you no longer use or want - Copy your remaining important data to your default container using rsync rsync -a --stats $USER_SCRATCH/ /rstore/user/$USER/default/scratch How to get more You can buy any amount of additional storage at a rate of $0.38/Gigabyte/year. That's $395/Terabyte/year. All we need to know is how much storage you want, for how long, and an account number. UAB IT will bill you monthly for the storage you consume What about backups? No Backups! There is no central back up process on the cluster. Each user is responsible for backing up their own data. If you are not managing a back up process for data on the cluster then you do not have any back ups. This rule includes the new Research Storage containers. Please understand we do not say this out of malice or a lack of concern for your data. Central backup processes are inherently ignorant and must assume all files are important. This is done at the very real expense of keeping multiple copies of data. In the context of large scale data sets which are typical of our HPC environment, this would amount to 100's of terabytes of data. Duplicating the footprint of data for which we are already busting at the seams to support a single copy. It is much better for individuals, teams, or labs and their technical support staff critical data and ensure it is backed up. How to Backup We understand this process can be difficult, especially if you are your own technical support staff. To that end, we have a new backup service available that leverages CrashPlan, a popular commercial backup product that will help you easily back up your data on your laptop or in your lab. Please contact us if you are interested in using CrashPlan to fulfil your responsibilities for backing up your own data.
https://docs.uabgrid.uab.edu/w/index.php?title=ResearchStorage&direction=prev&oldid=4772&printable=yes
2021-11-27T13:59:03
CC-MAIN-2021-49
1637964358189.36
[]
docs.uabgrid.uab.edu
... The See the following topics for more details: Jaeger (OpenTracing) Jaeger is a distributed tracing system that supports the OpenTracing standard. The ESB profile of WSO2 EI 6.6.0 can be configured with Jaeger so that message flow traces can be visualized from the Jaeger UI. See the following topics for more details:children
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=141241747&originalVersion=7&revisedVersion=10
2021-11-27T15:01:22
CC-MAIN-2021-49
1637964358189.36
[]
docs.wso2.com
java.util.concurrent Execution("Repository", this.repository); Credentials credentials = new SimpleCredentials("jsmith", "secret".toCharArray()); sessionFactory.registerCredentials("Repository/Workspace1", credentials); JcrExecutionContext context = new BasicJcrExecutionContext(sessionFactory,"Repository/Workspace1"); // Create the sequencing service, passing in the execution context ... SequencingService sequencingService = new SequencingService(); sequencingService.setExecutionContext(context); imageS ExecutionContext. Now that we've covered path expressions, let's go back to the three sequencer configuration in the example. Here they are again, with a description of what each path means: After these sequencer configurations are defined and added to the SequencingService, the service is now ready to start reacting to changes in the repository and automatically looking for nodes to sequence. But we first need to wire the service into the repository to receive those change events. observation service is pretty easy, especially if you reuse the same SessionFactory supplied to the sequencing service. Here's an example: this.observationService = new ObservationService(sessionFactory); this.observationService.getAdministrator().start(); Both ObservationService and SequencingS/jboss/example/dna represent the information using instances of ContentInfo (and its subclasses) and then passing samples are more readable and concise.) ... Workspace workspace = this.keepAliveSession.getWorkspace(); JackrabbitNodeTypeManager mgr = (JackrabbitNodeTypeManager)workspace was not already started), and proceeds to create and configure the SequencingService as described earlier. This involes setting up the SessionFactory and. imageSequencerConfig = new SequencerConfig(name, desc, classname, classpath, pathExpressions); this.sequencingService.addSequencer(imageSequencerConfig); // Set up the MP3 sequencer ... name = "Mp3 Sequencer"; desc = "Sequences mp3 files to extract the id3 tags);); // used for automatically sequencing a variety of types of information, and how those components.
http://docs.jboss.org/jbossdna/0.2/manuals/gettingstarted/html/using_dna_for_sequencing.html
2016-05-24T17:26:07
CC-MAIN-2016-22
1464049272349.32
[]
docs.jboss.org
To draw an object on the screen, the engine has to issue a draw call to the graphics API (e.g. OpenGL or Direct3D). Draw calls are often expensive, with the graphics API doing significant work for every draw call, causing performance overhead on the CPU side. This is mostly caused by the state changes done between the draw calls (e.g. switching to a different material), which causes expensive validation and translation steps in the graphics driver. Unity uses several techniques to address this: Built-in batching has several benefits compared to manually merging objects together (most notably, the objects can still be culled individually). But it also has some downsides too (static batching incurs memory and storage overhead; and dynamic batching incurs some CPU overhead). a single material instead. If you need to access shared material properties from the scripts, then it is important to note that modifying Renderer.material will create a copy of the material. Instead, you should use Renderer.sharedMaterial to keep material shared. While rendering shadow casters, they can often be batched together even if their materials are different. Shadow casters in Unity can use dynamic batching even with different materials, as long as the values in materials needed by the shadow pass are the same. For example, many crates could use materials with different textures on them, but for shadow caster rendering the textures are not relevant – in that case they can be batched together. Unity can automatically batch moving objects into the same draw call if they share the same material and fulfill other criteria. Dynamic batching is done automatically and does not require any additional effort on your side. Since it works by transforming all object vertices into world space on the CPU, it is only a win if that work is smaller than doing a “draw call”. How exactly expensive is a draw call depends on many factors, primarily on the graphics API used. For example, on consoles or modern APIs like Apple Metal the draw call overhead is generally much lower, and often dynamic batching can not be a win at all. Static batching allows the engine to reduce draw calls for geometry of any size (provided it does not move and shares the same material). It is often more efficient than dynamic batching (it does not transform vertices on the CPU), but it uses more memory. for storing the combined geometry. If several objects shared the same geometry before static batching, then a copy of geometry will be created for each object, either in the Editor or at runtime. This might not always be a good idea - sometimes you will have to sacrifice rendering performance by avoiding static batching for some objects to keep a smaller memory footprint. For example, marking trees as static in a dense forest level can have serious memory impact. Internally, static batching works by transforming the static objects into world space and building a big vertex + index buffer for them. Then for visible objects in the same batch, a series of “cheap” draw calls are done, with almost no state changes in between. So technically it does not save “3D API draw calls”, but it saves on state changes done between them (which is the expensive part). Currently, only Mesh Renderers are batched. This means that skinned meshes, cloth, trail renderers and other types of rendering components are not batched. Semitransparent shaders most often require objects to be rendered in back-to-front order for transparency to work. Unity first orders objects in this order, and then tries to batch them - but because the order must be strictly satisfied, this often means less batching can be achieved than with opaque objects. Manually combining objects that are close to each other might be a very good alternative to draw call batching. For example, a static cupboard with lots of drawers often makes sense to just combine into a single mesh, either in a 3D modeling application or using Mesh.CombineMeshes.
http://docs.unity3d.com/Manual/DrawCallBatching.html
2016-05-24T15:33:05
CC-MAIN-2016-22
1464049272349.32
[]
docs.unity3d.com
Setup Button The setup button is located on the bottom side of the DS100. The button can be pushed by a sharp tip of a pencil, pen, etc. The Button is used to select an operating mode of the DS100: * Strictly speaking, this is a functionality that is defined by the application firmware, not the DS100 hardware.
http://docs.tibbo.com/soism/ds100_button.htm
2009-07-03T17:53:47
crawl-002
crawl-002-025
[]
docs.tibbo.com
You can view audit logs for record Note: You can only view audit logs if your Alfresco administrator has given you the Access Audit permission.. Note: Users with access to the RM Admin Tools can run an audit of the entire Records Management system. Filing a report Whenever a record or folder is transferred, added to a hold, accessioned, or destroyed, you can file a report to keep a record of the process. When you file a report it’s filed as a record which you can then complete and process as with any other record. In the File Plan hover over a destroyed folder or record, or a folder or record awaiting transfer or accession completion, and click File Report. Note: Records and folders waiting for transfer and accession completion are stored by default in the Transfers area in the explorer panel. Records on a hold are stored by default in the Holds area in the explorer panel. Reports are filed by default to the Unfiled Records area of the File Plan. To select an alternate location deselect the File report to ‘Unfiled Records’ option and choose a different destination folder. Note: As with all records you must select a folder, not a category, to file the report to. Click File Report. The report is filed as an incomplete record in your selected destination. Viewing an audit log You can view audit logs for record categories, record folders, and records. Hover over a record category, folder, or record in the File Plan and click More then View Audit Log. Note: You can only view audit logs if your Alfresco administrator has given you the Access Audit permission. The audit log displays. You can click Export to export the audit log, or File as Record to select a location in the File Plan and file the audit log as a record.
https://docs.alfresco.com/governance-services/latest/using/audit-report/
2021-07-23T22:05:36
CC-MAIN-2021-31
1627046150067.51
[]
docs.alfresco.com
- Re-using Queries - Data Size - Maintenance Overhead - Finding Unused Indexes - Requirements for naming indexes - Temporary indexes Adding-tree indexes. These indexes however generally take up more data and are slower to update compared to B-tree (for example, fewer than 1,000 records) or any existing indexes filter out enough rows you may not want to add a new index. Maintenance. Requirements for naming indexes Indexes with complex definitions need to be explicitly named rather than relying on the implicit naming behavior of migration methods. In short, that means you must provide an explicit name argument for an index created with one or more of the following options: where using order length type opclass Considerations for index names Index names don’t have any significance in the database, so they should attempt to communicate intent to others. The most important rule to remember is that generic names are more likely to conflict or be duplicated, and should not be used. Some other points to consider: - For general indexes, use a template, like: index_{table}_{column}_{options}. - For indexes added to solve a very specific problem, it may make sense for the name to reflect their use. - Identifiers in PostgreSQL have a maximum length of 63 bytes. - Check db/structure.sqlfor conflicts and ideas. Why explicit names are required As Rails is database agnostic, it generates an index name only from the required options of all indexes: table name and column name(s). For example, imagine the following two indexes are created in a migration: def up add_index :my_table, :my_column add_index :my_table, :my_column, where: 'my_column IS NOT NULL' end Creation of the second index would fail, because Rails would generate the same name for both indexes. This is further complicated by the behavior of the index_exists? method. It considers only the table name, column name(s) and uniqueness specification of the index when making a comparison. Consider: def up unless index_exists?(:my_table, :my_column, where: 'my_column IS NOT NULL') add_index :my_table, :my_column, where: 'my_column IS NOT NULL' end end The call to index_exists? will return true if any index exists on :my_table and :my_column, and index creation will be bypassed. The add_concurrent_index helper is a requirement for creating indexes on populated tables. Since it cannot be used inside a transactional migration, it has a built-in check that detects if the index already exists. In the event a match is found, index creation is skipped. Without an explicit name argument, Rails can return a false positive for index_exists?, causing a required index to not be created properly. By always requiring a name for certain types of indexes, the chance of error is greatly reduced. Temporary indexes There may be times when an index is only needed temporarily. For example, in a migration, a column of a table might be conditionally updated. To query which columns need to be updated within the query performance guidelines, an index is needed that would otherwise not be used. In these cases, a temporary index should be considered. To specify a temporary index: - Prefix the index name with tmp_and follow the naming conventions and requirements for naming indexes for the rest of the name. - Create a follow-up issue to remove the index in the next (or future) milestone. - Add a comment in the migration mentioning the removal issue. A temporary migration would look like: INDEX_NAME = 'tmp_index_projects_on_owner_where_emails_disabled' def up # Temporary index to be removed in 13.9 add_concurrent_index :projects, :creator_id, where: 'emails_disabled = false', name: INDEX_NAME end def down remove_concurrent_index_by_name :projects, INDEX_NAME end
https://docs.gitlab.com/ee/development/adding_database_indexes.html
2021-07-23T23:08:01
CC-MAIN-2021-31
1627046150067.51
[]
docs.gitlab.com
Commerce v1 Class Reference Formatters Formatters are generalised functions that take in a raw value, and output a formatted value in a way that people would expect the value to be rendered. They are called automatically for many fields in the model, available as <field>_formatted in templates and code. To call a formatter from a model (taking into account the currency specified in the currency field, if it exists), use: return $this->formatValue($value, 'name-of-formatter'); From external code, such as snippets, you can also call the formatValue method on the Commerce service class. return $commerce->formatValue($value, 'name-of-formatter'); For financial formatting where you have the relevant currency object, you can also use $currency->format($amountInCents).
https://docs.modmore.com/en/Commerce/v1/Class_Reference/Formatters/index.html
2021-07-23T22:00:07
CC-MAIN-2021-31
1627046150067.51
[]
docs.modmore.com
WSO2 Governance Registry. Do you recommend a central database to store server configurations or a local database for each server to keep the system as stateless as possible? There is an embedded registry component that comes by default with every WSO2 product. In a standalone deployment of a WSO2 product, configurations are stored in this local registry, which can either run with its embedded H2 database or mounted onto an external database such as MySQL, Oracle etc..
https://docs.wso2.com/pages/viewpage.action?pageId=33132874
2021-07-23T23:11:09
CC-MAIN-2021-31
1627046150067.51
[]
docs.wso2.com
To start, click the Create Project button at the top (Remember, you must login to see that button!) You will presented with a form to fill in (No worries, it takes 1 minute!) Once you have decided on the kind of project you wish to undertake, the first step is to give your project a name. Your project name will be used within the web address assigned to your project so should be relatively short, with a maximum length of 50 chars. Project names must be unique, so you cannot use a name if someone else already took it. Think of it like a domain or url you register to yourself. Project name can contain only letters, numbers, underscores (_), dashes (-), and spaces. No special symbols. You can use both uppercase and lowercase. Maximum length is 50 chars, minimum length is 3 chars. We recommend a project name around 20 chars. The project name will be use to create a friendly url to your project (called slug) using only lowercase and dashes Have a look at the examples below: This is a short description of what your project is all about. It is basically a summary. You will be able to add a longer full description later on if you wish. A project must have at least one form to work with. It is the name you give to your questionnaire. You can add other nested forms later if you need. A form name can contain only letters, numbers, underscores (_), dashes (-) and spaces. No special symbols. You can use both uppercase or lowercase. The form name maximum length is 50 chars. A project can be private (accessible only to users you specify) or public (accessible to everyone). You can fine tune the type of access control on your data and project settings later. The private option is set as the default. That means you are the only one who can access the project (It requires login on both the server and the mobile app to be accessed). We recommend you leave the project as private as long as you are building and testing it, and setting it to public later when your form(s) is finalised, if you wish, or keep it as private and add users to it. Once you have done, click the Create button to create a project.
https://docs.epicollect.net/web-application/create-a-project
2021-07-23T22:28:11
CC-MAIN-2021-31
1627046150067.51
[]
docs.epicollect.net
Account Management Managing your account, creating new Users, pricing details, exporting data - Security at Help Scout - Enable SSO with OneLogin as the Identity Provider - User Roles and Permissions - Help Scout and HIPAA - Legacy Plan Changes - Log in to Help Scout - Data Export Options - Set Up Two-Factor Authentication - Transfer Account Ownership - Enabling SSO with a Generic Identity Provider - Session Management
https://docs.helpscout.com/category/108-account-management/2?sort=updatedAt
2021-07-23T21:03:53
CC-MAIN-2021-31
1627046150067.51
[]
docs.helpscout.com
Returns true if the update was successful. Incrementally updates the NavMeshData based on the sources. (UnityEngine). Try to provide the sources that have not changed since the last update in the same relative order as before because their sequence can affect the values of the hashes. This measure ensures that unchanged portions don't get rebuilt unnecessarily. You must supply a Bounds struct for the localBounds parameter. See Also: NavMeshBuilder.UpdateNavMeshDataAsync.
https://docs.unity3d.com/2019.2/Documentation/ScriptReference/AI.NavMeshBuilder.UpdateNavMeshData.html
2021-07-23T22:25:20
CC-MAIN-2021-31
1627046150067.51
[]
docs.unity3d.com
Validate signatures¶ This guide describes how to validate the contents of a Clear Linux* OS image. Overview¶ Validating the contents of an image is a manual process and is the same process that swupd performs internally. Clear. Image content validation¶ In the steps below, we used the installer image of the latest release of Clear Linux OS. You may use any image of Clear Linux OS you choose. Download the image, the signature of the SHA512 sum of the image, and the Clear Linux OS certificate used for signing the SHA512 sum. # Image curl -O(curl)-installer.img.xz # Signature of SHA512 sum of image curl -O(curl)-installer.img.xz-SHA512SUMS.sig # Certificate curl -O(curl)/clear/ClearLinuxRoot.pem Generate the SHA256 sum of the Clear Linux OS certificate. sha256sum ClearLinuxRoot.pem Ensure the generated SHA256 sum of the Clear Linux OS OS certificate. This confirms that the image is trusted and has not been modified. openssl smime -verify -purpose any -in clear-$(curl)-installer.img.xz-SHA512SUMS.sig -inform der -content sha512sum.out -CAfile ClearLinuxRoot.pem Note The -purpose any option is required when using OpenSSL 1.1. If you use an earlier version of OpenSSL, omit this option to perform signature validation. The openssl version command may be used to determine the version of OpenSSL in use. The output should contain “Verification successful”. If the output contains “bad_signature” anywhere, then the image is not trustworthy. Update content validation¶ Confirm that the generated SHA256 sum of the swupd certificate matches the SHA256 sum shown Confirm that the signature of the MoM was created using the Swupd certificate. This signature validates the update content is trustworthy and has not been modified. openssl smime -verify -purpose any -in Manifest.MoM.sig -inform der -content Manifest.MoM -CAfile Swupd_Root.pem Note The -purpose any option is required when using OpenSSL 1.1. If you use an earlier version of OpenSSL, omit this option to perform signature validation. The openssl version command may be used to determine the version of OpenSSL in use..
https://docs.01.org/clearlinux/latest/guides/maintenance/validate-signatures.html
2021-07-23T22:49:01
CC-MAIN-2021-31
1627046150067.51
[]
docs.01.org
Installation - 1 Introduction - 2 Memory Requirements - 3 Windows Installation - 4 Unix Installation - 5 Registration numbers for APGetInfo Introduction APGetInfo is a command-line application that retrieves information from PDF documents without Adobe® Acrobat®. APGetInfo on all UNIX platforms. If you run APGetInfo from the apgetinfo script created during installation, these environmental variables will be set by the script. If you run apgetinfoapp directly, you will need to set these environmental variables to run APGetInfo. Once APGetInfo is installed, you can view the variables needed by looking at the apgetinfo APGetInfo. It also contains license information for APGetInfo. APGetInfo In previous versions of APGetInfo, the APGetInfo registration number was required as a command line option with the -r flag each time apgetinfoapp was run. An apgetinfo script was provided that automatically added -r and the registration number to the apgetinfoapp command line when the script was run. In the current release, in addition to getting the registration number from the command line, APGetInfo.
https://docs.appligent.com/apgetinfo/apgetinfo-installation/
2021-07-23T22:36:46
CC-MAIN-2021-31
1627046150067.51
[]
docs.appligent.com
Trusted Security Circles The Trusted Security Circles application allows you and other users to generate and receive community-sourced observables (in the form of IP addresses, hashes, domains, URLs, and so forth) with the goal of improving threat prioritization and to shorten the time to identify and remediate threats. Explore Trusted Security Circles overview Domain separation and Trusted Security Circle Upgrade to Quebec. Security Operations videos Set up Activate the Trusted Security Circle Client Create a Trusted Security Circle profile Administer Set Trusted Security Circle properties Trusted Security Circles overview Use Trusted Security Circles overview Trusted Security Circles and Threat Intelligence sharing guidelines Trusted Security Circles messages Develop Developer training Developer documentation Components installed with Threat Intelligence Sharing Integrate ServiceNow Security Operations integration development guidelines Tips for writing integrations Troubleshoot and get help Ask or answer questions in the Security Operations community Search the Known Error Portal for known error articles Contact Customer Service and Support
https://docs.servicenow.com/bundle/quebec-security-management/page/product/trusted-circles/reference/trusted-circles-landing-page.html
2021-07-23T23:27:39
CC-MAIN-2021-31
1627046150067.51
[]
docs.servicenow.com
Helpful RadGrid Resources This page contains links to examples that you may find useful when implementing various scenarios with the grid. Even if you do not see the exact requirement you have, similar setups may give you ideas and show you ways to access various controls and properties that will let you achieve your goal. Appearance - 100% Height for RadGrid - this code library page explains how to set the grid height in percent values. The key requirement is that all of its parent html elements also have height set in percents, up to a parent with a fixed height (including the html, body, form elements and all Update panels. See also the Scrolling, Height vs ScrollHeight and Scroll With Static Headers articles. DataEditing: - Batch Editing Extensions - Related RadComboBoxes and Batch Validation- This Code Library provides an extension for the RadGrid Batch Editing functionality, which allows you to implement related RadComboBoxes functionality between column and to set Batch Validation. - Performing updates/inserts containing HTML for a Batch Editing grid- The Code Library illustrates how one can use HTML to edit a certain field data in a batch editing grid. - Manual Insert/Update/Delete operations using Auto-generated editform with sql statements from the code-behind- Demonstrates manual Insert/Update/Delete operations using Auto-generated editform with sql statements from the code-behind - Prevent Losing Batch Editing Changes on Paging or any other PostBack- This code-library demonstrates how to prevent the action if there are any unsaved Batch changes. - Copy-Paste Cell/Row data through RadContextMenu with Batch Editing-This code library demonstrates how to implement Copy-Paste functionality for cells and rows for RadGrid in Batch Edit Mode with RadContextMenu. - Manual CRUD Operations with LinqDataSource- The current code library demonstrates RadGrid's capability for inserting new data, updating existing data and deleting data handled using RadGrid public API and Linq to SQL data context. - Automatic operations with ObjectDataSource control- This demo represents how to perform automatic operations (update/insert/delete) with ObjectDataSource control - Multi-column edit form- In this code library the edit form spans across three columns. - Combining different edit modes- This project demonstrates how to combine different edit modes in RadGrid.The main idea behind this approach is to fire custom commands for different edit modes in such manner, so they could be parsed to GridEditMode type. - Select Last Inserted Row- This project describes how to select the last inserted row in the RadGrid. DataBinding: - Integration with SignalR- The sample illustrates how RadGrid can be integrated with SignalR so that the data on multiple clients is updated automatically - RadGrid - Client-side Databinding with WebAPI- The attached project shows integration of RadGrid and RadClientDataSource with Web API. It shows basic databinding and batch editing sending and receiving information through Web API Filtering: - Multi-Selection RadComboBox for filtering grid- This project shows how you could use Multi-Selection RadComboBox (with checkboxes in ItemTemplate) in the FilterTemplate of RadGrid - Conditionally hide controls from Excel-Like filtering menu- This web site demonstrates how to modify the visibility of the elements inside the Excel-Like filtering menu both on server and client-side. - Add "Clear filter" icon to all your GridCheckBox columns - Quick Search - filtering rows depending on entered text instantly when paging is disabled- When RadGrid contains only one single page, all of its data items are loaded and rendered on the web page. We can make avail of this and implement client-side search RadTextBox which will instantly filter the grid items over a condition without posting back to the server. - Single filter button for all columns in RadGrid- The project demonstrates how a single filter button can be implemented to filter all columns in RadGrid. - Filtering and sorting for GridButtonColumn- The example illustrates how to extend the built-in GridButtonColumn to support filtering and sorting. - How to make RadGrid Filtering Controls Resize Together with the Columns- The following demo shows how to make the filtering controls (textboxes, RadNumericTextBoxes, RadDatePickers) resize in real time together with the RadGrid columns. Exporting: - Export RadGrid with HtmlCharts- This code library demonstrates how to export RadGrid and HtmlChart controls to PDF document by using both the built-in export functionality of RadGrid and RadClientExportManager control. - Styling and formatting Word and Excel document- Generally most of the applied styles and formats to RadGrid are properly exported without additional modification. Nevertheless, in some cases you might need to add a styles and/or formats only to the exporting document and this code library demonstrates exactly that. For this purpose you can use the InfrastructureExporting event handler when you are using a binary based export format. - Export RadGrid with hierarchy and grouping to Excel and Word- This code library demonstrates how to use the Export Infrastructure to export hierarchical RadGrid with grouping. - Persisting expanded groups when exporting to Excel (HTML)- Sometimes it may be useful to persist the expanded groups when exporting RadGrid data. - Multiple worksheets in ExcelML - Export multiple RadGrids in single PDF/Excel file- This project illustrates how to export multiple RadGrid controls into single Excel/PDF file by using another RadGrid to wrap the contents. Grouping: - Custom Range Grouping- This project demonstrates how to create custom range grouping with RadGrid when using Advanced Data-Binding through NeedDataSource event. - Grouping + Conditional Formatting + Dynamic Control- Demonstrates how one can customize the rad-grid and can programmatically, add dynamic controls such as link button, show/hide columns & headers along with grouping. - ExpandCollapseGroupedGridOnDoubleClickingTheGroupHeader- This project illustrates how to expand/collapse grouped items in RadGrid on double clicking the Group header. - Grouping single column at a time. Scrolling: - RadGrid scrolling with predefined step- This project shows how to move the horizontal and vertical scroll bars of the RadGrid with given by a developer step.. Hierarchy: - Autogenerated hierarchy- This code library demonstrates RadGrid's capability to auto-generate a hierarchical representation of a mutli-table DataSet. - Accessing and validating controls client-side inside a hierarchical RadGrid- This demo demonstrates how to access the parent and child grid's rows on client-side. - Select items in hierarchy, depending on selection in inner levels- The following project demonstrates selecting items in a hierarchy, depending on selection in inner levels. Drag and Drop: - Cancel row drag and drop when ESC key is pressed- This project demonstrates how to handle the client side RowDragging event of RadGrid to modify the appearance of the dragged row. It also displays the current position of the dragged item and provides an approach to cancel item dragging upon pressing the ESC key. - Drag and drop between flat and hierarchical grid- This project shows how to implement drag and drop between flat (source) and hierarchical (target) grids.
https://docs.telerik.com/devtools/aspnet-ajax/controls/grid/how-to/Other-resources
2021-07-23T23:02:09
CC-MAIN-2021-31
1627046150067.51
[]
docs.telerik.com
Script interface for the TriggerModule of a Particle System. This module is useful for killing particles when they touch a set of collision shapes, or for calling a script command to let you apply custom particle behaviors when the trigger is activated. The example code for MonoBehaviour.OnParticleTrigger shows how the callback type action works. Particle System modules do not need to be reassigned back to the system; they are interfaces and not independent objects.
https://docs.unity3d.com/ja/2019.4/ScriptReference/ParticleSystem-trigger.html
2021-07-23T23:48:24
CC-MAIN-2021-31
1627046150067.51
[]
docs.unity3d.com
Caching edges and properties How to configure graph caching for edges and properties. Caching can improve query performance and is configurable. DSE Graph has two types of cache: adjacency list cache and index/property cache. Either edges or properties can be cached using the schema API vertexLabel() method with the cache() option. Caching can be configured for all edges, all properties, or a filtered set of edges. Vertices are not cached directly, but caching properties and edges that define the relationship between vertices essentially accomplishes the same operation. Property caching is enabled if indexes exist and are used in the course of queries. Full graph scan queries will not be cached. If an index does not exist, then caching does not occur. Adjacency list caching is enabled if caching is configured for edges. The caches are local to a node and data is loaded into cache when it is read with a query. Both caches are set to a default size of 128 MB in the dse.yaml file. The settings are adjacency_cache_size_in_mb and index_cache_size_in_mb. Both caches utilize off-heap memory implemented as Least Recently Used (LRU) cache. Caching is intended to help make queries more efficient if the same information is required in a later query. For instance, caching the calories property for meal vertices will improve the retrieval of a query asking for all meals with a calorie count less than 850 calories. Graph cache is local to each node in the cluster, so the cached data can be different between nodes. Thus, a query can use cache on one node, but not on another. The caches are updated only when the data is not found. Graph caching does not have any means of eviction. No flushing occurs, and the cache is not updated if an element is deleted or modified. The cache will only evict data based on the time-to-live (TTL) value set when the cache is configured for an element. Set a low TTL value for elements (property keys, vertex labels, edge labels) that change often to avoid stale data. Graph cache is useful for rarely changed graph data. The queries that will use graph cache effectively are queries that repeatedly run. If the queries run differ even in the sort order, the graph cache will not be used to reduce the query latency. For instance, caching the calories property for meal vertices will improve the retrieval of a query asking for all meals with a calorie count less than 850 calories, if this query is repeated. Note that all properties for all meal vertices will be cached along with calories. Procedure - Cache all properties for authorvertices up to an hour (3600 seconds): schema.vertexLabel('author').cache().properties().ttl(3600).add() Enabling property cache causes index queries to use IndexCache for the specified vertex label. - Cache both incoming and outgoing creatededges for authorvertices up to a minute (60 seconds): schema.vertexLabel('author').cache().bothE('created').ttl(60).add()
https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/graph/using/caching.html
2018-11-13T00:38:02
CC-MAIN-2018-47
1542039741176.4
[]
docs.datastax.com
- WHMCS Billing System - Web Host Manager Complete Solution, is one of the leading web hosting management and billing softwares, which automates all aspects of the businessTip: The WHMCS integration is ensured with a help of the appropriate Jelastic plugin. Below, you can check the detailed description and comparison of the two available versions - PBA Billing System - Parallels (Odin) Business Automation, is a billing automation solution for the small and medium web hosting businesses management and scaling PBAS Billing System - Parallels (Odin) Business Automation Standard, is an automation solution, which supports management of all the business aspects of a web hosting service Also, some particular hosting providers can opt to use custom billing system, in such a case refer to the appropriate provider’s documentation for the additional details.
https://docs.jelastic.com/billing-system
2018-11-13T00:15:08
CC-MAIN-2018-47
1542039741176.4
[]
docs.jelastic.com
Bug Check 0xD2: BUGCODE_ID_DRIVER The BUGCODE_ID_DRIVER bug check has a value of 0x000000D2. This indicates that a problem occurred with an NDIS driver. Important This topic is for programmers. If you are a customer who has received a blue screen error code while using your computer, see Troubleshoot blue screen errors. BUGCODE_ID_DRIVER Parameters In the following instances of this bug check, the meaning of the parameters depends on the message and on the value of Parameter 4. Remarks This bug check code only occurs on Windows 2000 and Windows XP. In Windows Server 2003 and later, the corresponding code is bug check 0x7C (BUGCODE_NDIS_DRIVER). On the checked build of Windows, only the Allocating Shared Memory at Raised IRQL and Completing Reset When One is Not Pending instances of this bug check can occur. All the other instances of bug check 0xD2 are replaced with ASSERTs. See Breaking Into the Debugger for details.
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0xd2--bugcode-id-driver
2018-11-13T00:24:18
CC-MAIN-2018-47
1542039741176.4
[]
docs.microsoft.com
$. Note If you wish to compile using Clang rather than GCC, use this command: user@host:~/godot$ scons platform=x11 use_llvm=yes:~/[username]/.
https://godot.readthedocs.io/en/latest/development/compiling/compiling_for_x11.html
2018-11-13T01:24:21
CC-MAIN-2018-47
1542039741176.4
[array(['../../_images/lintemplates.png', '../../_images/lintemplates.png'], dtype=object) ]
godot.readthedocs.io
A dashboard state describes the changes resulting from end-user interaction. The dashboard state object is the DashboardState class instance. It may contain: The DashboardControl provides the following API to manage the dashboard state: Gets the current dashboard state. Applies the dashboard state to the loaded dashboard. Allows you to specify the initial dashboard state when loading a dashboard.
https://docs.devexpress.com/Dashboard/400144/building-the-designer-and-viewer-applications/wpf-viewer/manage-dashboard-state
2018-11-13T00:24:28
CC-MAIN-2018-47
1542039741176.4
[]
docs.devexpress.com
Security Bulletin Microsoft Security Bulletin MS13-030 - Important Vulnerability in SharePoint Could Allow Information Disclosure (2827663) Published: April 09, 2013 Version: 1.0 General Information. The security update addresses the vulnerability by correcting the default access controls applied to the SharePoint list. Server Software [1]This update requires prior installation of the Project Server 2013 cumulative update (2768001). For more information about the update, including download links, see Microsoft Knowledge Base Article 2768001. Non-Affected Software Update FAQ I have the affected software installed on my system. Why am I not being offered the update? The 2737969 update requires prior installation of the Project Server 2013 cumulative update (2768001). If the 2768001 update is not installed on affected systems, the 2737969 update will not be offered. For more information about the Project Server 2013 cumulative update, including download links, see Microsoft Knowledge Base Article 2768001. I have downloaded the update from the Microsoft Download Center; why does the installation fail? Download Center installations of the 2737969 update fails on systems that have not yet had the Project Server 2013 cumulative update (2768001) applied. Users should apply the 2768001 update before installing the 2737969 update. For more information about the Project Server 2013 cumulative update, including download links, see Microsoft Knowledge Base Article 2768001. Why is the 2768001 update for Project Server a prerequisite for this update? The 2768001 update for Project Server is a prerequisite for this update (2737969) due to a package configuration change that was introduced after Project Server 2013 was released. You must install Project Server update 2768001 before installing later SharePoint Server 2013 updates.. Incorrect Access Rights Information Disclosure Vulnerability - CVE-2013-1290 An information disclosure vulnerability exists in the way that SharePoint Server enforces access controls on specific SharePoint Lists. To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2013-1290. Mitigating Factors Mitigation refers to a setting, common configuration, or general best-practice, existing in a default state, that could reduce the severity of exploitation of a vulnerability. The following mitigating factors may be helpful in your situation: - An attacker must have valid Active Directory credentials before validation as a SharePoint user, and subsequent access to other users' files, could be possible. - The "Everyone" group used in assigning sharing permissions in Windows does not include "Anonymous users". - The attack vector for this vulnerability is established through new My Sites that have been created using the legacy user interface mode in SharePoint Server 2013 installations that were upgraded from SharePoint Server 2010. New My Sites created with clean installations of SharePoint Server 2013 are not subject to exploitation SharePoint Server 2013, set the permissions for users' personal document libraries to explicitly deny access to "NT Authenticated\All users" and set the permissions on each personal library to "Stop Inheriting Permissions". For more information, see Edit permissions for a list, library, or individual item. What is the scope of the vulnerability? This is an information disclosure vulnerability. An attacker who successfully exploited the vulnerability could gain access to documents to which the attacker would not otherwise have access. What causes the vulnerability? The vulnerability is caused by the way that SharePoint, by default, applies access controls to a SharePoint list. What might an attacker use the vulnerability to do? An attacker who successfully exploited this vulnerability could gain access to list items in a SharePoint list that the list owner did not intend for the attacker to be able to access. How could an attacker exploit the vulnerability? To exploit this vulnerability, an attacker would need to know the address or location of a specific SharePoint list to access the list's items. In order to gain access to the SharePoint site where the list is maintained, the attacker would need to be able to satisfy the SharePoint site's authentication requests. What systems are primarily at risk from the vulnerability? Systems that are running an affected version of SharePoint Server are primarily at risk. What does the update do? The update addresses the vulnerability by correcting the default access controls applied to the SharePoint list. When this security bulletin was issued, had this vulnerability been publicly disclosed? Yes. This vulnerability has been publicly disclosed. It has been assigned Common Vulnerability and Exposure number CVE-2013-1290.: SharePoint Server 2013
https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2013/ms13-030
2018-11-13T01:04:36
CC-MAIN-2018-47
1542039741176.4
[]
docs.microsoft.com
Returns a Table Array of all the Teams in the Group. When this function has no arguments it only updates when the Project is opened or a Specification is started or opened. The overload has one argument whose value needs to change in order for the function to be recalculated. SppGetAllTeams([Recalculation Trigger]) Where: Recalculation Trigger (optional) is the trigger to force a recalculation during the running of a Specification. See How To: Force a data refresh when data has changed for one example of implementing a trigger.
http://docs.driveworkspro.com/Topic/SppGetAllTeams
2018-11-13T01:38:28
CC-MAIN-2018-47
1542039741176.4
[]
docs.driveworkspro.com
Manually deploying agents - tarball Install agents on nodes running DataStax Enterprise clusters. - --user dsa_email_address:password\ Use the -fflag to run in the foreground.
https://docs.datastax.com/en/opscenter/6.1/opsc/install/opsc-agentInstallManual_t.html
2018-11-13T00:38:29
CC-MAIN-2018-47
1542039741176.4
[]
docs.datastax.com
Bug Check 0x117: VIDEO_TDR_TIMEOUT_DETECTED The VIDEO_TDR_TIMEOUT_DETECTED bug check has a value of 0x00000117. This indicates that the display driver failed to respond in a timely fashion. Important This topic is for programmers. If you are a customer who has received a blue screen error code while using your computer, see Troubleshoot blue screen errors. VIDEO_TDR_TIMEOUT_DETECTED Parameters. 3: kd> !analyze -v ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* VIDEO_TDR_TIMEOUT_DETECTED (117) The display driver failed to respond in timely fashion. (This code can never be used for a real bugcheck.) Arguments: Arg1: 8975d500, Optional pointer to internal TDR recovery context (TDR_RECOVERY_CONTEXT). Arg2: 9a02381e, The pointer into responsible device driver module (e.g owner tag). Arg3: 00000000, The secondary driver specific bucketing key. Arg4: 00000000, Optional internal context dependent data. ... Also displayed will be the faulting module name MODULE_NAME: atikmpag IMAGE_NAME: atikmpag.sys You can use the lmv command to display information about the faulting driver, including the timestamp. 3: kd> lmvm atikmpag Browse full module list start end module name 9a01a000 9a09a000 atikmpag T (no symbols) Loaded symbol image file: atikmpag.sys Image path: atikmpag.sys Image name: atikmpag.sys Browse all global symbols functions data Timestamp: Fri Dec 6 12:20:32 2013 (52A23190) CheckSum: 0007E58A ImageSize: 00080000 Translations: 0000.04b0 0000.04e4 0409.04b0 0409.04e4 Parameter 1 contains a pointer to the TDR_RECOVERY_CONTEXT. 3: kd> dt dxgkrnl!_TDR_RECOVERY_CONTEXT fffffa8010041010 +0x000 Signature : ?? +0x004 pState : ???? +0x008 TimeoutReason : ?? +0x010 Tick : _ULARGE_INTEGER +0x018 pAdapter : ???? +0x01c pVidSchContext : ???? +0x020 GPUTimeoutData : _TDR_RECOVERY_GPU_DATA +0x038 CrtcTimeoutData : _TDR_RECOVERY_CONTEXT::<unnamed-type-CrtcTimeoutData> +0x040 DbgOwnerTag : ?? +0x048 PrivateDbgInfo : _TDR_DEBUG_REPORT_PRIVATE_INFO +0xae0 pDbgReport : ???? +0xae4 pDbgBuffer : ???? +0xae8 DbgBufferSize : ?? +0xaec pDumpBufferHelper : ???? +0xaf0 pDbgInfoExtension : ???? +0xaf4 pDbgBufferUpdatePrivateInfo : ???? +0xaf8 ReferenceCount : ?? Memory read error 10041b08 Parameter 2 contains a pointer into the responsible device driver module (for example, the owner tag). BUGCHECK_P2: ffffffff9a02381e You may wish to examine the stack trace using the k, kb, kc, kd, kp, kP, kv (Display Stack Backtrace) command. 3: kd> k # ChildEBP RetAddr 00 81d9ace0 976e605e dxgkrnl!TdrUpdateDbgReport+0x93 [d:\blue_gdr\windows\core\dxkernel\dxgkrnl\core\dxgtdr.cxx @ 944] 01 81d9acfc 976ddead dxgkrnl!TdrCollectDbgInfoStage2+0x195 [d:\blue_gdr\windows\core\dxkernel\dxgkrnl\core\dxgtdr.cxx @ 1759] 02 81d9ad24 976e664f dxgkrnl!DXGADAPTER::Reset+0x23f [d:\blue_gdr\windows\core\dxkernel\dxgkrnl\core\adapter.cxx @ 14972] 03 81d9ad3c 977be9e0 dxgkrnl!TdrResetFromTimeout+0x16 [d:\blue_gdr\windows\core\dxkernel\dxgkrnl\core\dxgtdr.cxx @ 2465] 04 81d9ad50 977b7518 dxgmms1!VidSchiRecoverFromTDR+0x13 [d:\blue_gdr\windows\core\dxkernel\dxgkrnl\dxgmms1\vidsch\vidscher.cxx @ 1018] 05 (Inline) -------- dxgmms1!VidSchiRun_PriorityTable+0xfa71 06 81d9ad70 812c01d4 dxgmms1!VidSchiWorkerThread+0xfaf2 [d:\blue_gdr\windows\core\dxkernel\dxgkrnl\dxgmms1\vidsch\vidschi.cxx @ 424] 07 81d9adb0 81325fb1 nt!PspSystemThreadStartup+0x58 [d:\blue_gdr\minkernel\ntos\ps\psexec.c @ 5884] 08 81d9adbc 00000000 nt!KiThreadStartup+0x15 [d:\blue_gdr\minkernel\ntos\ke\i386\threadbg.asm @ 81] You can also set a breakpoint in the code leading up to this stop code and attempt to single step forward into the faulting code, if you can consistently reproduce the stop..
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x117---video-tdr-timeout-detected
2018-11-13T00:10:49
CC-MAIN-2018-47
1542039741176.4
[]
docs.microsoft.com
Denys Shabalin EXPERIMENTAL AST manipulation in macros and compiler plugins Quasiquotes were designed primary as tool for ast manipulation in macros. A common workflow is to deconstruct arguments with quasiquote patterns and then construct a rewritten result with another quasiquote: // macro that prints the expression code before executing it object debug { def apply[T](x: => T): T = macro impl def impl(c: Context)(x: c.Tree) = { import c.universe._ val q"..$stats" = x val loggedStats = stats.flatMap { stat => val msg = "executing " + showCode(stat) List(q"println($msg)", stat) } q"..$loggedStats" } } // usage object Test extends App { def faulty: Int = throw new Exception debug { val x = 1 val y = x + faulty x + y } } // output executing val x: Int = 1 executing val y: Int = x.+(Test.this.faulty) java.lang.Exception ... To simplify integration with macros we’ve also made it easier to simply use trees in macro implementations instead of the reify-centric Expr api that might be used previously: // 2.10 object Macro { def apply(x: Int): Int = macro impl def impl(c: Context)(x: c.Expr[Int]): c.Expr[Int] = { import c.universe._ c.Expr(q"$x + 1") } } // in 2.11 you can also do it like that object Macro { def apply(x: Int): Int = macro impl def impl(c: Context)(x: c.Tree) = { import c.universe._ q"$x + 1" } } You no longer need to wrap the return value of a macro with c.Expr, or to specify the argument types twice, and the return type in impl is now optional. Quasiquotes can also be used “as is” in compiler plugins as the reflection API is strict subset of the compiler’s Global API. Just in time compilation Thanks to the ToolBox API, one can generate, compile and run Scala code at runtime: scala> val code = q"""println("compiled and run at runtime!")""" scala> val compiledCode = toolbox.compile(code) scala> val result = compiledCode() compiled and run at runtime! result: Any = () Offline code generation Thanks to the new showCode “pretty printer” one can implement an offline code generator that does AST manipulation with the help of quasiquotes, and then serializes that into actual source code right before writing it to disk: object OfflineCodeGen extends App { def generateCode() = q"package mypackage { class MyClass }" def saveToFile(path: String, code: Tree) = { val writer = new java.io.PrintWriter(path) try writer.write(showCode(code)) finally writer.close() } saveToFile("myfile.scala", generateCode()) }
https://docs.scala-lang.org/overviews/quasiquotes/usecases.html
2018-11-13T00:37:58
CC-MAIN-2018-47
1542039741176.4
[]
docs.scala-lang.org
This page lists all software products that have product life cycle date patterns included in TKU 2009-Apr-1. The list comprises of data for: You can use Tideway Foundation and OS Support Date data to: A sub set comprising of data for Microsoft Windows and all Linux operating systems is shipped with the Tideway Foundation 7.2 Community Edition. Customers can access the latest TKU-SupportDetails releases from the Tideway Downloads site, click Download Area then Latest TKU. Installation instructions are available here. An OS Product. Note: AIX 5.3 and AIX 6.1 will show as No Data in Tideway Foundation as IBM have currently not published end of support dates for these releases. Note: Solaris 9 and Solaris 10 will show as No Data in Tideway Foundation as Sun have currently not published end of support dates for these releases.
https://docs.bmc.com/docs/display/Configipedia/OS+Product+Life+Cycle+Patterns+-+TKU+2009-Apr-1
2020-07-02T23:15:09
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Rusqlite Rusqlite is an ergonomic wrapper for using SQLite from Rust. It attempts to expose an interface similar to rust-postgres. use rusqlite::{params, Connection, Result}; #[derive(Debug)] struct Person { id: i32, name: String, data: Option<Vec<u8>>, } fn main() -> Result<()> { let conn = Connection::open_in_memory()?; conn.execute( "CREATE TABLE person ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, data BLOB )", params![], )?; let me = Person { id: 0, name: "Steven".to_string(), data: None, }; conn.execute( "INSERT INTO person (name, data) VALUES (?1, ?2)", params![me.name, me.data], )?; let mut stmt = conn.prepare("SELECT id, name, data FROM person")?; let person_iter = stmt.query_map(params![], |row| { Ok(Person { id: row.get(0)?, name: row.get(1)?, data: row.get(2)?, }) })?; for person in person_iter { println!("Found person {:?}", person.unwrap()); } Ok(()) } cc crate to compile SQLite from source and link against that. This source is embedded in the libsqlite3-syscrate and is currently SQLite 3.30.1 (as of rusqlite0.21.0 / libsqlite3-sys0.17.0). This is probably the simplest solution to any build problems. You can enable this by adding the following in your Cargo.tomlfile: [dependencies.rusqlite] version = "0.21, which must be enabled by setting VCPKGRS_DYNAMIC=1environment variable before build.. If you enable the modern_sqlite feature, we'll use the bindings we would have included with the bundled build. You generally should have buildtime_bindgen enabled if you turn this on, as otherwise you'll need to keep the version of SQLite you link with in sync with what rusqlite would have bundled, (usually the most recent release of sqlite). Failing to do this will cause a runtime error. Author John Gallagher, [email protected] License Rusqlite is available under the MIT license. See the LICENSE file for more info.
https://docs.rs/crate/rusqlite/0.22.0-beta.0
2020-07-02T22:13:04
CC-MAIN-2020-29
1593655880243.25
[]
docs.rs
Intelligence Center The Intelligence Center is a Threat Intelligence knowledge base that is being constantly updated by SEKOIA analysts. It is meant to store all levels of Cyber Threat Intelligence (CTI), from strategic (targets, motivations) to technical (indicator of compromises). We recommend you learn about our data model before learning how you can leverage the API.
https://docs.sekoia.io/intelligence_center/
2020-07-02T22:49:50
CC-MAIN-2020-29
1593655880243.25
[array(['/assets/intelligence_center/maze_graph.png', 'SEKOIA.IO Intelligence Center Screenshot'], dtype=object)]
docs.sekoia.io
Configuring the BEM adapter You configure an adapter in Grid Manager. The configuration provides information about how the adapter interacts with BMC Event Manager (BEM). XML sample into the UI when configuring an adapter. You can switch to the XML view to configure those elements and attributes that are not available as fields or to configure all the elements and attributes using XML only. However, after you switch to the XML view and save the configuration in the XML from that view, you cannot thereafter use the form view for modifying that configuration. Note The default name for the actor adapter is BMCEventManager. This topic describes the properties to configure the BMC Event Manager adapter: To configure the actor adapter, monitor adapter, or both - Log on to the BMC Atrium Orchestrator Grid Manager. - Access the adapters page by clicking the Manage tab; then click the Adapters tab. - In the Adapters in Repository list: - Select the ro-adapter-bmc-event-manager-actorcheck box to configure the actor adapter. - Select the ro-adapter-bmc-event-manager-monitorcheck box to configure the monitor adapter. - Select both check boxes to configure both adapters. - Click Add to Grid to include the adapter in the Adapters on Grid list. - Click Configure corresponding to the newly added adapter. - On the Add an Adapter Configuration page, perform the following substeps to configure the adapter using the form view or skip to step 7 to configure the adapter using the XML view: Enter a name for the adapter. Note The Name field does not support single-quote (') and ampersand (&) characters. - Enter a description for the adapter. - Under Properties, enter or select values for the configuration elements. Include all required elements, indicated with an asterisk (*). - (optional) Configure the adapter in the XML view using the following substeps: - Enter a name and a description for the adapter. - Click Switch to XML View. - On the Warning message that appears, click Switch View. Copy the configuration elements and attributes from the XML sample node elements required for configuring the actor adapter: The following figure shows an XML sample for configuring the actor adapter. XML sample for configuring the actor adapter <config> <mcell-dir-file-path>/xyz/abc/lmn/mcell.dir</mcell-dir-file-path> <user-name>someuser</user-name> </config> The following table describes the node elements required for configuring the monitor adapter: Note The configuration information that you specify for the monitor adapter must match that specified in the incomm.conf and mcell.dir files located on the BEM server. The following figure shows an XML sample for configuring the monitor adapter. XML sample for configuring the monitor adapter <config> <port>1859</port> <gateway-name>BEMGW</gateway-name> <encryption-key>mc</encryption-key> <.:
https://docs.bmc.com/docs/AtriumOrchestratorContent/201503/configuring-the-bem-adapter-614649927.html
2020-07-02T23:22:27
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Management port for password operations When you add Windows systems to the Privileged Access Service, the Add System wizard scans for available ports to determine the port to use for password-related operations. The management port is also used to change, update, or rotate the managed account password on the target system. Depending on the results of the scan, the protocol and port used to validate and manage password changes might be set to one of the following: Remote Procedure Call (RPC) protocol over TCP and port 135. Server Message Block (SMB) protocol and port 445. Windows Remote Management (WinRM) over HTTPS if port 5986 is open Windows Remote Management (WinRM) over HTTP if port 5985 is open. If a suitable protocol and port cannot be found or the user account to be used for password management and validation does not have the appropriate permissions, the management mode for the system is automatically set to Disabled/Not Detected. Depending on the protocol you want to use for password management and validation, you might need to unblock a port or set up a proxy user and password with administrative privileges to run PowerShell commands, then retry automatic detection. You can use System Settings after adding a system to manually set a management protocol and port or to select Auto-Detect to try to detect an appropriate port again if the first attempt failed. If managing passwords using Remote Procedure Call (RPC) protocol over TCP and port 135, you must enable the default Netlogon Service Authz (RPC) firewall rule on the Windows system or create a firewall rule to open port 49152-65535 (TCP Dynamic) for inbound RPC endpoint connections. For more information about port assignments and flow for password operations, see Communication for password-related activity.
https://docs.centrify.com/Content/Infrastructure/resources-add/svr-mgr-pwd-operations.htm
2020-07-02T21:28:38
CC-MAIN-2020-29
1593655880243.25
[]
docs.centrify.com
Known issues with BHM v2 This post is used to list the known issues reported in the last version of BizTalk Health Monitor (currently the v2) and when possible workarounds. You can let in this post your comments if you want to report new issues. We will update periodically BHM to fix them and bring also new features. Known issues: When a user account is specified to schedule a BHM collect, the creation of the task returns the following error: "A specified logon session does not exist. It may already have been terminated". The policy "Do not allow storage of passwords and credentials for network authentication" is maybe enabled preventing to create a task under a specific user account. If it is a local policy and if you can disable it, follow these steps : - Open the 'Local Security Policies' MMC - Expand 'Local Policies' and select 'Security Options' - Disable that property - the system must reboot for the change to take effect BHM on a localized OS is hanging and crashing when creating a new monitoring profile. The root cause was identified and fixed. We will release soon a post V2 update fixing that problem. Users who want a temporary fix can contact JP ([email protected]). BHM on a localized OS is displaying an error message when creating the collect scheduled task even if the task is well created. The root cause was identified and fixed. We will release soon a post V2 update fixing that problem. Users who want a temporary fix can contact JP ([email protected]). BHM on a localized OS is not displaying the performance counters in the interactive performance view. Performance counters are localized and BHM provides only a list of US counters for the moment; they are however pending localization for the main languages. To workaround that issue, you can add manually these counters in the performance view reusing the ones added on a US server. When we create a monitoring profile targeting a BizTalk group not accessible by the logged-on user, the console is crashing. During the creation of a profile, BHM connects to the Mgmt db of the targeted group to retrieve the list of BizTalk servers of that group - this list will be used in the performance nodes. This connection attempt made under the interactive logged-on user account is failing because this user does not have rights on the Mgmt Db, and this error is not well catched. We will release soon a post V2 update allowing to specify the user account in the new profile dialogbox and managing better connection errors. You should be able workaround that issue by editing manually an XML profile file following the steps below : - Duplicate an existing profile using the new "duplicate profile" menu item - open a CMD window under elevated admin priviledge - in this cmd change the folder to "c:\programdata\Microsoft\BizTalk Health Monitor" - open in notepad the XML file corresponding to this duplicated profile - modify in this xml file the value of MGMTDBSERVER and MGMTDBNAME properties to specify the location of the new Mgmt db you want to target - modify the value of the "DEFAULTBTS" property entering the name of one BizTalk server of that new targeted group - modify the value of the "ALLBTS" property entering this same BizTalk server or the complete list of all your BTS of the new targeted group, ex : Value="BTSRV1:True:True:True, BTSRV2:False:False:True" - save the xml file - Open BHM console : you should see then the new profile listed - Display the Settings dialogbox of the new profile and specify the user to run the collect as There is no SSL usage option in the mail notifications settings An option to use SSL will be added in the next public version of BHM; in the meantime users who want a temporary build implementing that option can contact JP ([email protected]). Error when creating a scheduled analyze task if the username or password contains some special chars like '&' This issue is fixed in the next version which will be available very soon. To workaround that issue, you can avoid specifying the user name and password when enabling the task in BHM, and then edit manually the created BHM task in the Task Scheduler application to run the task under a specific user account.
https://docs.microsoft.com/en-us/archive/blogs/biztalkhealthmonitor/known-issues-with-bhm-v2
2020-07-02T23:07:47
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
Framework Content Element. Try Find Resource(Object) Method. If no resource was found, null is returned..
https://docs.microsoft.com/en-us/dotnet/api/system.windows.frameworkcontentelement.tryfindresource?view=netframework-4.8
2020-07-02T23:09:56
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
ISC DHCP Overview ISC DHCP offers a complete open source solution for implementing DHCP servers. Setup This setup guide will show you how to forward logs produced by your DHCP servers DHCP configuration file for rsyslog: Paste the following rsyslog configuration to trigger the emission of DHCP logs by your rsyslog server to SEKOIA.IO: In the above template instruction, replace YOUR_INTAKE_KEY variable with your intake key. 3. Restart rsyslog 4. Enjoy your events Go to the events page to watch your incoming events. Related files - SEKOIA-IO-intake.pem: SEKOIA.IO TLS Server Certificate (1674b)
https://docs.sekoia.io/integrations/dhcpd/
2020-07-02T23:11:22
CC-MAIN-2020-29
1593655880243.25
[]
docs.sekoia.io
Assert a condition and logs an error message to the Unity console on failure. Message of a type of LogType.Assert is logged. Note that these methods work only if UNITY_ASSERTIONS symbol is defined. This means that if you are building assemblies externally, you need to define this symbol otherwise the call becomes a no-op. (For more details see System.Diagnostics.ConditionalAttribute on the MSDN website.
https://docs.unity3d.com/ScriptReference/Debug.Assert.html
2020-07-02T23:17:04
CC-MAIN-2020-29
1593655880243.25
[]
docs.unity3d.com
Review and Authorize stage - Risk and impact analysis. You also specify the Class of the change on the Change form. Selecting these values automatically determines the Risk Level and Lead Time for the change. For more information and instructions, see the following topics:
https://docs.bmc.com/docs/change1808/review-and-authorize-stage-risk-and-impact-analysis-821050065.html
2020-07-02T23:16:22
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com