Dataset columns: content (string, 0-557k chars), url (string, 16-1.78k chars), timestamp (timestamp[ms]), dump (string, 9-15 chars), segment (string, 13-17 chars), image_urls (string, 2-55.5k chars), netloc (string, 7-77 chars).
Delete a cluster pair

You can delete a cluster pair from the Element UI of either of the clusters in the pair.

1. Click Data Protection > Cluster Pairs.
2. Click the Actions icon for a cluster pair.
3. In the resulting menu, click Delete.
4. Confirm the action.
5. Perform the steps again from the second cluster in the cluster pairing.
https://docs.netapp.com/us-en/element-software/storage/task_replication_delete_cluster_pair.html
2021-07-24T07:10:43
CC-MAIN-2021-31
1627046150134.86
[]
docs.netapp.com
Whoosh 1.x release notes

Whoosh 1.8.3

Whoosh 1.8.3 contains important bugfixes and new functionality. Thanks to all the mailing list and BitBucket users who helped with the fixes!

- Fixed a bad Collector bug where the docset of a Results object did not match the actual results.
- You can now pass a sequence of objects to a keyword argument in add_document and update_document (currently this will not work for unique fields in update_document). This is useful for non-text fields such as DATETIME and NUMERIC, allowing you to index multiple dates/numbers for a document:

    writer.add_document(shoe=u"Saucony Kinvara", sizes=[10.0, 9.5, 12])

- This version reverts to using the CDB hash function for hash files instead of Python's hash(), because the latter is not meant to be stored externally. This change maintains backwards compatibility with old files.
- The Searcher.search method now takes a mask keyword argument. This is the opposite of the filter argument: where filter specifies the set of documents that can appear in the results, mask specifies a set of documents that must not appear in the results.
- Fixed performance problems in Searcher.more_like. This method now also takes a filter keyword argument like Searcher.search.
- Improved documentation.

Whoosh 1.8.2

Whoosh 1.8.2 fixes some bugs, including a mistyped signature in Searcher.more_like and a bad bug in Collector that could screw up the ordering of results given certain parameters.

Whoosh 1.8.1

Whoosh 1.8.1 includes a few recent bugfixes/improvements:

- ListMatcher.skip_to_quality() wasn't returning an integer, resulting in a "None + int" error.
- Fixed locking and memcache sync bugs in the Google App Engine storage object.
- MultifieldPlugin wasn't working correctly with groups.
- The binary matcher trees of Or and And are now generated using a Huffman-like algorithm instead of being perfectly balanced. This gives a noticeable speed improvement because less information has to be passed up/down the tree.
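The filter/mask semantics described above can be pictured as plain set operations. This is only a conceptual sketch (the function name is invented for illustration; Whoosh applies filtering during matching, not as a post-filter over a finished result set):

```python
def apply_filter_and_mask(matched, allow=None, deny=None):
    """Conceptual model of Whoosh's filter/mask keyword arguments:
    `allow` plays the role of filter (only these docs may appear in the
    results), `deny` plays the role of mask (these docs must not appear)."""
    result = set(matched)
    if allow is not None:
        result &= set(allow)   # filter: keep only documents in the allowed set
    if deny is not None:
        result -= set(deny)    # mask: drop documents in the masked set
    return result
```

For example, with matched documents {1, 2, 3, 4}, a filter of {2, 3, 4} and a mask of {4}, only documents 2 and 3 survive.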
Whoosh 1.8

This release relicensed the Whoosh source code under the Simplified BSD (a.k.a. "two-clause" or "FreeBSD") license. See LICENSE.txt for more information.

Whoosh 1.7.7

- Setting a TEXT field to store term vectors is now much easier. Instead of having to pass an instantiated whoosh.formats.Format object to the vector= keyword argument, you can pass True to automatically use the same format and analyzer as the inverted index. Alternatively, you can pass a Format subclass and Whoosh will instantiate it for you.

  For example, to store term vectors using the same settings as the inverted index (Positions format and StandardAnalyzer):

    from whoosh.fields import Schema, TEXT
    schema = Schema(content=TEXT(vector=True))

  To store term vectors that use the same analyzer as the inverted index (StandardAnalyzer by default) but only store term frequency:

    from whoosh.formats import Frequency
    schema = Schema(content=TEXT(vector=Frequency))

  Note that currently the only place term vectors are used in Whoosh is keyword extraction/"more like this", but they can be useful for expert users with custom code.

- Added whoosh.searching.Searcher.more_like() and whoosh.searching.Hit.more_like_this() methods, as shortcuts for doing keyword extraction yourself. Both return a Results object.
- "python setup.py test" works again, as long as you have nose installed.
- The whoosh.searching.Searcher.sort_query_using() method lets you sort documents matching a given query using an arbitrary function. Note that, like "complex" searching with the Sorter object, this can be slow on large multi-segment indexes.

Whoosh 1.7

- You can once again perform complex sorting of search results (that is, a sort with some fields ascending and some fields descending).
- You can still use the sortedby keyword argument to whoosh.searching.Searcher.search() to do a simple sort (where all fields are sorted in the same direction), or you can use the new Sorter class to do a simple or complex sort:

    searcher = myindex.searcher()
    sorter = searcher.sorter()
    # Sort first by the "group" field, ascending
    sorter.add_field("group")
    # Then by the "price" field, descending
    sorter.add_field("price", reverse=True)
    # Get the Results
    results = sorter.sort_query(myquery)

  See the documentation for the Sorter class for more information. Bear in mind that complex sorts will be much slower on large indexes because they can't use the per-segment field caches.

- You can now get highlighted snippets for a hit automatically using whoosh.searching.Hit.highlights():

    results = searcher.search(myquery, limit=20)
    for hit in results:
        print hit["title"]
        print hit.highlights("content")

  See whoosh.searching.Hit.highlights() for more information.

- Added the ability to filter search results so that only hits in a Results set, a set of docnums, or documents matching a query are returned. The filter is cached on the searcher.

    # Search within previous results
    newresults = searcher.search(newquery, filter=oldresults)
    # Search within the "basics" chapter
    results = searcher.search(userquery, filter=query.Term("chapter", "basics"))

- You can now specify a time limit for a search. If the search does not finish in the given time, a whoosh.searching.TimeLimit exception is raised, but you can still retrieve the partial results from the collector. See the timelimit and greedy arguments in the whoosh.searching.Collector documentation.
- Added back the ability to set whoosh.analysis.StemFilter to use an unlimited cache. This is useful for one-shot batch indexing (see Tips for speeding up batch indexing).
- The normalize() method of the And and Or queries now merges overlapping range queries for more efficient queries.
- Query objects now have __hash__ methods allowing them to be used as dictionary keys.
- The API of the highlight module has changed slightly. Most of the functions in the module have been converted to classes. However, most old code should still work. The NullFragmenter is now called WholeFragmenter, but the old name is still available as an alias.
- Fixed MultiPool so it won't fill up the temp directory with job files.
- Fixed a bug where Phrase query objects did not use their boost factor.
- Fixed a bug where a fieldname after an open parenthesis wasn't parsed correctly. The change alters the semantics of certain parsing "corner cases" (such as a:b:c:d).

Whoosh 1.6

- The whoosh.writing.BatchWriter class is now called whoosh.writing.BufferedWriter. It is similar to the old BatchWriter class but allows you to search and update the buffered documents as well as the documents that have been flushed to disk:

    writer = writing.BufferedWriter(myindex)

    # You can update (replace) documents in RAM without having to commit them
    # to disk
    writer.add_document(path="/a", text="Hi there")
    writer.update_document(path="/a", text="Hello there")

    # Search committed and uncommitted documents by getting a searcher from the
    # writer instead of the index
    searcher = writer.searcher()

  (BatchWriter is still available as an alias for backwards compatibility.)

- The whoosh.qparser.QueryParser initialization method now requires a schema as the second argument. Previously the default was to create a QueryParser without a schema, which was confusing:

    qp = qparser.QueryParser("content", myindex.schema)

- The whoosh.searching.Searcher.search() method now takes a scored keyword. If you search with scored=False, the results will be in "natural" order (the order the documents were added to the index). This is useful when you don't need scored results but want the convenience of the Results object.
- Added the whoosh.qparser.GtLtPlugin parser plugin to allow greater-than/less-than as an alternative syntax for ranges:

    count:>100 tag:<=zebra date:>='29 march 2001'

- Added the ability to define schemas declaratively, similar to Django models:

    from whoosh import index
    from whoosh.fields import SchemaClass, ID, KEYWORD, STORED, TEXT

    class MySchema(SchemaClass):
        uuid = ID(stored=True, unique=True)
        path = STORED
        tags = KEYWORD(stored=True)
        content = TEXT

    index.create_in("indexdir", MySchema)

Whoosh 1.6.2

- Added whoosh.searching.TermTrackingCollector, which tracks which part of the query matched which documents in the final results.
- Replaced the unbounded cache in whoosh.analysis.StemFilter with a bounded LRU (least recently used) cache. This will make stemming analysis slightly slower but prevent it from eating up too much memory over time.
- Added a simple whoosh.analysis.PyStemmerFilter that works when the py-stemmer library is installed:

    ana = RegexTokenizer() | PyStemmerFilter("spanish")

- The estimation of memory usage for the limitmb keyword argument to FileIndex.writer() is more accurate, which should help keep memory usage by the sorting pool closer to the limit.
- The whoosh.ramdb package was removed and replaced with a single whoosh.ramindex module.
- Miscellaneous bug fixes.

Whoosh 1.5

Note: Whoosh 1.5 is incompatible with previous indexes. You must recreate existing indexes with Whoosh 1.5.

- Fixed a bug where postings were not portable across different-endian platforms.
- New generalized field cache system, using per-reader caches, for much faster sorting and faceting of search results, as well as much faster multi-term (e.g. prefix and wildcard) and range queries, especially for large indexes and/or indexes with multiple segments.
- Changed the faceting API. See Sorting and faceting.
- Faster storage and retrieval of posting values.
- Added a per-field multitoken_query attribute to control how the query parser deals with a "term" that generates multiple tokens when analyzed. The default value is "first", which throws away all but the first token (the previous behavior). Other possible values are "and", "or", or "phrase".
- Added whoosh.analysis.DoubleMetaphoneFilter, whoosh.analysis.SubstitutionFilter, and whoosh.analysis.ShingleFilter.
- Added whoosh.qparser.CopyFieldPlugin.
- Added whoosh.query.Otherwise.
- Generalized parsing of operators (such as OR, AND, NOT, etc.) in the query parser to make it easier to add new operators. I intend to add a better API for this in a future release.
- Switched NUMERIC and DATETIME fields to use more compact on-disk representations of numbers.
- Fixed a bug in the porter2 stemmer when stemming the string "y".
- Added methods to whoosh.searching.Hit to make it more like a dict.
- Short posting lists (by default, single postings) are inlined in the term file instead of written to the posting file, for faster retrieval and a small saving in disk space.

Whoosh 1.3

Whoosh 1.3 adds a more efficient DATETIME field based on the new tiered NUMERIC field, and the DateParserPlugin. See Indexing and parsing dates/times.

Whoosh 1.2

Whoosh 1.2 adds tiered indexing for NUMERIC fields, resulting in much faster range queries on numeric fields.

Whoosh 1.0

Whoosh 1.0 is a major milestone release with vastly improved performance and several useful new features. The index format of this version is not compatible with indexes created by previous versions of Whoosh. You will need to reindex your data to use this version.

- Orders-of-magnitude faster searches for common terms. Whoosh now uses optimizations similar to those in Xapian to skip reading low-scoring postings.
- Faster indexing, and the ability to use multiple processors (via the multiprocessing module) to speed up indexing.
- Flexible Schema: you can now add and remove fields in an index with the whoosh.writing.IndexWriter.add_field() and whoosh.writing.IndexWriter.remove_field() methods.
- New hand-written query parser based on plug-ins. Less brittle, more robust, more flexible, and easier to fix/improve than the old pyparsing-based parser.
- On-disk formats now use 64-bit disk pointers, allowing files larger than 4 GB.
- New whoosh.searching.Facets class efficiently sorts results into facets based on any criteria that can be expressed as queries, for example tags or price ranges.
- New whoosh.writing.BatchWriter class automatically batches up individual add_document and/or delete_document calls until a certain number of calls or a certain amount of time passes, then commits them all at once.
- New whoosh.analysis.BiWordFilter lets you create bi-word indexed fields, a possible alternative to phrase searching.
- Fixed a bug where files could be deleted before a reader could open them in threaded situations.
- New whoosh.analysis.NgramFilter filter, whoosh.analysis.NgramWordAnalyzer analyzer, and whoosh.fields.NGRAMWORDS field type allow producing n-grams from tokenized text.
- Errors in query parsing now raise a specific whoosh.qparse.QueryParserError exception instead of a generic exception.
- Previously, the query string * was optimized to a whoosh.query.Every query, which matched every document. Now the Every query only matches documents that actually have an indexed term from the given field, to better match the intuitive sense of what a query string like tag:* should do.
- New whoosh.searching.Searcher.key_terms_from_text() method lets you extract key words from arbitrary text instead of documents in the index.
- Previously the whoosh.searching.Searcher.key_terms() and whoosh.searching.Results.key_terms() methods required that the given field store term vectors. They now also work if the given field is stored instead; they will analyze the stored string into a term vector on the fly.
  The field must still be indexed.

User API changes

- The default for the limit keyword argument to whoosh.searching.Searcher.search() is now 10. To return all results in a single Results object, use limit=None.
- The Index object no longer represents a snapshot of the index at the time the object was instantiated. Instead it always represents the index in the abstract. Searcher and IndexReader objects obtained from the Index object still represent the index as it was at the time they were created.
- Because the Index object no longer represents the index at a specific version, several methods such as up_to_date and refresh were removed from its interface. The Searcher object now has last_modified(), up_to_date(), and refresh() methods similar to those that used to be on Index.
- The document deletion and field add/remove methods on the Index object now create a writer behind the scenes to accomplish each call. This means they write to the index immediately, so you don't need to call commit on the Index. Also, if you need to call them multiple times, it will be much faster to create your own writer instead:

    # Don't do this
    for id in my_list_of_ids_to_delete:
        myindex.delete_by_term("id", id)
    myindex.commit()

    # Instead do this
    writer = myindex.writer()
    for id in my_list_of_ids_to_delete:
        writer.delete_by_term("id", id)
    writer.commit()

- The postlimit argument to Index.writer() has been changed to postlimitmb and is now expressed in megabytes instead of bytes:

    writer = myindex.writer(postlimitmb=128)

- Instead of having to import whoosh.filedb.filewriting.NO_MERGE or whoosh.filedb.filewriting.OPTIMIZE to use as arguments to commit(), you can now simply do the following:

    # Do not merge segments
    writer.commit(merge=False)
    # or
    # Merge all segments
    writer.commit(optimize=True)

- The whoosh.postings module is gone. The whoosh.matching module contains classes for posting list readers.
- Whoosh no longer maps field names to numbers for internal use or writing to disk.
- Any low-level method that accepted field numbers now accepts field names instead.
- Custom Weighting implementations that use the final() method must now set the use_final attribute to True:

    from whoosh.scoring import BM25F

    class MyWeighting(BM25F):
        use_final = True

        def final(searcher, docnum, score):
            return score + docnum * 10

  This disables the new optimizations, forcing Whoosh to score every matching document.

- whoosh.writing.AsyncWriter now takes a whoosh.index.Index object as its first argument, not a callable. Also, the keyword arguments to pass to the index's writer() method should now be passed as a dictionary using the writerargs keyword argument.
- Whoosh now stores per-document field lengths using an approximation rather than exactly. For low numbers the approximation is perfectly accurate, while high numbers are approximated less accurately.
- The doc_field_length method on searchers and readers now takes a second argument representing the default to return if the given document and field do not have a length (i.e. the field is not scored or the field was not provided for the given document).
- whoosh.analysis.StopFilter now has a maxsize argument as well as a minsize argument to its initializer. Analyzers that use the StopFilter now also have the maxsize argument in their initializers.
- The interface of whoosh.writing.AsyncWriter has changed.
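The approximate field-length storage mentioned above is essentially a lossy one-byte encoding: small lengths round-trip exactly, while large lengths land in logarithmic buckets. The sketch below illustrates the general idea only; it is not Whoosh's actual codec, and the base 1.1 and cutoff 128 are arbitrary choices for illustration:

```python
import math

def length_to_byte(length):
    # Lengths below 128 fit exactly in the low half of the byte.
    if length < 128:
        return length
    # Larger lengths: logarithmic bucketing into the remaining 128 codes,
    # so relative (not absolute) error stays roughly constant.
    bucket = int(min(127, round(math.log(length, 1.1) - math.log(128, 1.1))))
    return min(255, 128 + bucket)

def byte_to_length(b):
    if b < 128:
        return b
    return int(round(1.1 ** (b - 128 + math.log(128, 1.1))))
```

Under this scheme a length of 50 is stored exactly, while a length of 1000 decodes to a nearby approximation, trading accuracy for a fixed one-byte cost per document/field.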
https://docs.red-dove.com/whoosh/releases/1_0.html
2021-07-24T08:34:40
CC-MAIN-2021-31
1627046150134.86
[]
docs.red-dove.com
Edge

Edge is an unstructured CFD solver developed at the Swedish Defence Research Agency. The Edge flow solver is based on a node-centered finite volume scheme. For steady flows, the equations are integrated towards steady state with an explicit multi-stage Runge-Kutta scheme. To accelerate convergence, residual smoothing and a multi-grid technique can be employed. Low Mach-number preconditioning is also available. Time-accurate computations are implemented using dual time-stepping: implicit time marching with explicit sub-iterations.

License: No licensing information available.

Experts: No experts have currently registered expertise on this specific subject.
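The explicit multi-stage Runge-Kutta marching described above can be sketched for a scalar model problem. The stage coefficients and function name below are illustrative only, not Edge's actual scheme:

```python
def rk_multistage(f, u, dt, alphas=(0.25, 0.5, 1.0)):
    # Jameson-style multi-stage update: each stage re-evaluates the residual
    # f at the latest stage value, but always steps from the original state u.
    # Marching this repeatedly drives u toward a steady state where f(u) = 0.
    u_new = u
    for a in alphas:
        u_new = u + a * dt * f(u_new)
    return u_new
```

For the linear decay model f(u) = -u, one step from u = 1 with dt = 0.1 gives 0.904875, which agrees with exp(-0.1) through second order in dt.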
https://docs.snic.se/wiki/Edge
2021-07-24T08:49:22
CC-MAIN-2021-31
1627046150134.86
[]
docs.snic.se
Report on Funnel and Path

This tutorial expands on templated funnels to show how you can create your own funnel and path reports. These reports are powerful tools for analyzing the behaviors of your users. Funnel and path reports are available. Since most users are unable to complete a goal in a single session, our example funnel and path report shows goals achieved over a time range.

Show Goals Achieved Over a Time Range

- Go to Reports » Performance Analytics.
- Select the Revenue dashboard.
- For the Funnel report, click the ellipsis icon and select Explore From Here.
- In Filters, edit the filters and values as desired, then click Run.

See the Path of Users

To see the path of users, e.g., an abandoned cart, follow these steps after completing the steps in Show Goals Achieved Over a Time Range:
https://docs.airship.com/guides/messaging/user-guide/data/analytics/funnel-path/
2021-07-24T07:34:30
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
You're viewing Apigee Edge documentation. View Apigee X documentation. This section describes version 4.50.00. To get started with version 4.50.00, use the following links:
- New installations: New installation overview
- Existing installations: Upgrade paths
https://docs.apigee.com/release/notes/45000-edge-private-cloud-release-notes?hl=pt-BR
2021-07-24T08:40:39
CC-MAIN-2021-31
1627046150134.86
[]
docs.apigee.com
Briostack Setup

In order to authenticate the connector, you will simply need your username, password, and account name. Your account name can be found in the address of your website; for example, if your site is johnsmithltd.briostack.com, your account name would be johnsmithltd.

Important Note

The Briostack API requires a separate account for each installation of the connector. If you try to run two connectors authenticated with the same details, you may receive the following error: "Congratulations! It looks like your team is growing and you need more Brio Sales licenses. Reach out to Briostack at 801-623-5200 or [email protected] to add more."

Sales Methods

Within the Sales methods, you can select a category to return only those sales which fall within it, or return all sales by leaving the field blank.

Customers

It is not currently possible to filter the customers by business sector, so for most use cases "List Customers" would not be a practical method. As a result, Get Customer has been provided for returning individual customer records.
https://docs.cyclr.com/briostack-connector
2021-07-24T08:03:53
CC-MAIN-2021-31
1627046150134.86
[]
docs.cyclr.com
1 Introduction

Text is a group of widgets that consists of Text, Paragraph, Headings (H1-H6), and the Page Title. They are used to display textual information to the end-user. For example, you can display a text paragraph.

2 Text, Paragraph, and Headings General Properties

You can use Text, Paragraph, or Heading widgets to display a text to the end-user. In Properties > General, you can type the text that will be displayed, define if it contains attribute values, and set the render mode.

2.1 Content

In Content, you define the text that will be shown. You can also add attributes, and the attribute value will be displayed to the user. For example, when the user logs in to the account, a greeting message can be shown, where Name and NumberOfMessages are attribute values.

2.1.1 Configuring Content Without Adding Attributes

To configure Content without adding attributes, you can do one of the following:

- Double-click the widget on the page and start typing the text you want to show to the end-user; press Enter to save changes
- Open Properties of the widget, delete the default text in the General section > Content, and type the message you want to show to the end-user

2.1.2 Configuring Content and Adding Attributes

To configure Content and add attributes to it, do the following:

- Place the widget (Text, Paragraph, or Heading) inside a data container (a list view or a data view) and set an entity for the list view/data view. For more information, see Data View & List View. This is necessary to allow attributes of the selected entity to be inserted into the text.
- Open Properties of the Text, Paragraph, or Heading, delete the default text in the General section > Content, and start typing the message you want to show to the end-user.
- To insert attribute values into your message, click Add attribute or press Ctrl + Space. The list of attributes which can be inserted will be shown.
Scroll through the list of attributes (you can also use the Up and Down arrows for that) and select the attribute you want to add to the Text. Type the rest of the text, and insert more attributes if required, to finish your message.

You have configured the Content of your widget. If you want to edit it, you can double-click the widget in the page; the Edit Text pop-up dialog will be shown for widgets with attributes in their content.

2.2 Render Mode

The render mode defines the way a text will be shown to the end-user. Basically, Text, Paragraph, and Heading widgets are different render modes of the same widget. Possible values of the render mode are described in the table below.

3 Page Title General Properties

The Page Title widget sets the title of the current page and displays it. This title also appears as the page title in your browser tab. The title will be displayed in the H1 style of the Theme Customizer. For details, see Theme Customizer.

If you want to change the name of the page, do the following:

- Open Properties of the widget > the General section.
- Change the name in the Title field.

The page title is changed. The Title that you see in the page properties and in the widget is one and the same. This means that if you make changes to the title in page properties, this change will be displayed in the widget, and vice versa. You can put several Title widgets on your page, but they will all display the same text and cannot be edited individually.

4 Conditional Visibility Section

Conditional visibility allows you to hide a widget from a page unless certain conditions are met. For information on the Conditional Visibility section and its properties, see Conditional Visibility Section.

5 Design Section

For information on the Design section and its properties, see Design Section.
https://docs.mendix.com/studio8/page-editor-widgets-text
2021-07-24T07:58:31
CC-MAIN-2021-31
1627046150134.86
[array(['attachments/page-editor-widgets-text/paragraph-example.png', None], dtype=object) array(['attachments/page-editor-widgets-text/content-example.png', None], dtype=object) array(['attachments/page-editor-widgets-text/edit-text.png', None], dtype=object) array(['attachments/page-editor-widgets-text/page-title-interrelation.png', None], dtype=object) ]
docs.mendix.com
DataGridViewAutoSizeColumnMode Enum

Definition

Important: Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
https://docs.microsoft.com/en-gb/dotnet/api/system.windows.forms.datagridviewautosizecolumnmode?view=net-5.0
2021-07-24T07:52:30
CC-MAIN-2021-31
1627046150134.86
[]
docs.microsoft.com
Setting the FontSize in C# when a Telerik theme is applied

Description

How to set a different font size in C# for themes which support palettes.

Solution

When using any of the Telerik themes which support palettes, you can dynamically change the FontSize and FontFamily properties of all components in the application. Most controls use the theme palette's FontSize property, though each theme supports a different set of font sizes. For example, if the Office2016Theme is applied to your application, you can change the following font size properties of its palette.

Example 1: Modifying the FontSize and FontFamily in the Office2016 theme

    Office2016Palette.Palette.FontSizeS = 10;
    Office2016Palette.Palette.FontSize = 12;
    Office2016Palette.Palette.FontSizeL = 14;
    Office2016Palette.Palette.FontFamily = new FontFamily("Segoe UI");

Example 2: Changing the default FontSize and FontFamily in the Office2016 theme on a button click

    private void OnButtonChangeFontSizeClick(object sender, RoutedEventArgs e)
    {
        Office2016Palette.Palette.FontSize = 14;
        Office2016Palette.Palette.FontFamily = new FontFamily("Calibri");
    }

Figure 1: Setting different FontSize and FontFamily

The approach used in the above code snippets is applicable for the following themes: Windows8, Windows8Touch, Office2013, VisualStudio2013, Office2016, Office2016Touch, Green, Fluent, Material, Crystal, and VisualStudio2019. If you are using a different theme, which does not have a palette, you can change the font size of the application through Application.Current.MainWindow.FontSize, or apply it directly to the control you wish, e.g. this.dataGrid.FontSize = 20.

See Also: Available Themes, Theme Helper, Switching Themes at Runtime
https://docs.telerik.com/devtools/wpf/knowledge-base/kb-common-setting-fontsize-of-a-control-with-telerik-theme
2021-07-24T09:03:24
CC-MAIN-2021-31
1627046150134.86
[array(['../images/common-styling-appearance-office2016-theme-1.png', None], dtype=object) ]
docs.telerik.com
ClearContact

The ClearContact method clears the information currently displayed on the Edit Contact Information GUI panel. It also closes curContact. Clicking Clear invokes ClearContact. The method does the following:

- Clears the Edit Contact Information text boxes.
- If curContact is not null, it closes curContact and sets the reference to null.

Here is the method. Add the body of the method to the ClearContact stub in PhoneFormObj.cs.

    private void ClearContact()
    {
        txtConId.Text = "";
        txtConName.Text = "";
        if (curContact != null)
        {
            curContact.Close();
            curContact = null;
        }
    }
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=TCMP_ClearContact
2021-07-24T01:09:41
CC-MAIN-2021-31
1627046150067.87
[]
docs.intersystems.com
Returns an array of strings describing the connected joysticks. This can be useful in user input configuration screens; this way, instead of showing labels like "Joystick 1", you can show more meaningful names like "Logitech WingMan". To read values from different joysticks, you need to assign respective axes for the number of joysticks you want to support in the Input Manager. The position of a joystick in this array corresponds to the joystick number, i.e. the name in position 0 of this array is for the joystick that feeds data into 'Joystick 1' in the Input Manager, the name in position 1 corresponds to 'Joystick 2', and so on. Note that some entries in the array may be blank if no device is connected for that joystick number.

    // Prints a joystick name if movement is detected.
    // Requires you to set up axes "Joy0X" - "Joy3X" and "Joy0Y" - "Joy3Y" in the Input Manager.
    function Update () {
        for (var i : int = 0; i < 4; i++) {
            if (Mathf.Abs(Input.GetAxis("Joy" + i + "X")) > 0.2 || Mathf.Abs(Input.GetAxis("Joy" + i + "Y")) > 0.2)
                Debug.Log(Input.GetJoystickNames()[i] + " is moved");
        }
    }

    using UnityEngine;
    using System.Collections;

    public class ExampleClass : MonoBehaviour
    {
        void Update()
        {
            int i = 0;
            while (i < 4)
            {
                if (Mathf.Abs(Input.GetAxis("Joy" + i + "X")) > 0.2F || Mathf.Abs(Input.GetAxis("Joy" + i + "Y")) > 0.2F)
                    Debug.Log(Input.GetJoystickNames()[i] + " is moved");
                i++;
            }
        }
    }
https://docs.unity3d.com/2017.3/Documentation/ScriptReference/Input.GetJoystickNames.html
2021-07-24T02:56:34
CC-MAIN-2021-31
1627046150067.87
[]
docs.unity3d.com
Brekeke PBX is used to create an office telephony system, and its Multi-Tenant edition provides a hosted telephony service. Brekeke PBX provides trouble-free telephone systems for any organization. It supports both Windows and Linux platforms. To view a topic of interest, please click on the topics listed in the left column.
https://docs.brekeke.com/pbx/
2021-07-24T02:06:24
CC-MAIN-2021-31
1627046150067.87
[]
docs.brekeke.com
6. Extension Settings Properties

The tables below contain the property names that may be viewed or altered using the WebSocket methods. These lists are not comprehensive:

1. Callback Extension
2. Conference Extension
3. Groups Extension (Simultaneous Ring, Call Hunting)
4. IVR Extension (Auto Attendant, Add/Remove Forwarding Destinations, Switch Plan, Script Flow)
5. Schedule Extension
6. User Extension

- pln[n]_d_xxxx are property names for the user extension [Inbound] page plan [n] Default Forwarding Schedule settings.
- pln[n]_ptn[n]_xxxx are property names for the user extension [Inbound] page plan [n] Forwarding Schedule [n] settings.
- User Forwarding Schedule [n] property names are the same as those in the Default Forwarding Schedule settings, but with a different prefix. For the Forwarding Schedule [n] Conditions property names, please check the Schedule Extension table.
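The prefix scheme above can be illustrated with a small helper. This function is hypothetical (it is not part of Brekeke's API); it only shows how the documented property names compose from a plan number, an optional schedule number, and a property suffix:

```python
def inbound_property_name(plan, prop, schedule=None):
    # Default Forwarding Schedule properties use the pln[n]_d_ prefix;
    # Forwarding Schedule [n] properties use the pln[n]_ptn[n]_ prefix.
    if schedule is None:
        return "pln%d_d_%s" % (plan, prop)
    return "pln%d_ptn%d_%s" % (plan, schedule, prop)
```

For example, the (hypothetical) suffix "mode" on plan 1's default schedule would compose to "pln1_d_mode", and on plan 2's schedule 3 to "pln2_ptn3_mode".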
https://docs.brekeke.com/pbx/extension-settings-properties
2021-07-24T01:05:38
CC-MAIN-2021-31
1627046150067.87
[]
docs.brekeke.com
Use this table to determine whether your Ambari and HDP stack versions are compatible. For more information about:
- Installing Accumulo, Flume, Hue, Knox, and Solr services, see Installing HDP Manually.
- HDP 2.0.6 stack (or later) patch releases, see the HDP release notes, available at HDP Documentation.
- Deploying Ambari and the HDP stack, see Deploying, Configuring, and Upgrading HDP.
https://docs.cloudera.com/HDPDocuments/Ambari-1.5.1.0/bk_using_Ambari_book/content/ambari-chap1-compatibility.html
2021-07-24T02:41:12
CC-MAIN-2021-31
1627046150067.87
[]
docs.cloudera.com
DeepFactor benefits developers in three key ways: with DeepFactor, developers get the visibility they need to ensure their releases are Runtime Ready. Developers can code faster and better, and ship knowing that their release will deliver value to the business without introducing runtime security, compliance, and performance issues. Development leaders can be confident that their applications are secure, performing well, and meeting customer expectations.
https://docs.deepfactor.io/hc/en-us/articles/360053671513-What-are-the-key-DeepFactor-benefits-for-developers-
Custom Activation Strategy

Even though Unleash comes with a few powerful activation strategies, there might be scenarios where you would like to extend Unleash with your own custom strategies.

#Example: TimeStamp Strategy

In this example we want to define an activation strategy that offers a scheduled release of a feature toggle. This means that we want the feature toggle to be activated after a given date and time.

#Define custom strategy

First we need to "define" our new strategy. To add a new strategy, open the Strategies tab from the sidebar. We name our strategy TimeStamp and add one required parameter of type string, which we call enableAfter.

#Use custom strategy

After we have created the strategy definition, we can decide to use that activation strategy for our feature toggle. In this example we want to use our custom strategy for the feature toggle named demo.TimeStampRollout.

#Client implementation

All official client SDKs for Unleash provide abstractions for you to implement support for custom strategies. Until you have provided support for the custom strategy, the client will return false, because it does not understand the activation strategy.

In Node.js the implementation for the TimeStampStrategy would be:

In the example implementation we make use of the moment library to parse the timestamp and verify that the current time is after the specified enableAfter parameter. All parameters injected into the strategy are handled as strings, which means the strategy needs to parse them into a more suitable format. In this example we just parse the value directly to a Date and do the comparison; in a real implementation you might also want to consider time zones.

We also have to remember to register the custom strategy when initializing the Unleash client. Full working code example:
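The Node.js code samples referenced above did not survive extraction. As a sketch of what such a TimeStampStrategy might look like (shown standalone here for clarity; in a real project the class would extend Strategy from the unleash-client package, and Date.parse stands in for the moment library):

```javascript
// Sketch of a custom "TimeStamp" strategy for the Node.js Unleash client.
// In a real project this class would extend Strategy from 'unleash-client';
// it is shown standalone so the logic is easy to follow.
class TimeStampStrategy {
  constructor() {
    // Must match the strategy name defined in the Unleash admin UI.
    this.name = 'TimeStamp';
  }

  // parameters.enableAfter is the required string parameter, e.g. an ISO date.
  // All parameters arrive as strings, so parse before comparing.
  isEnabled(parameters, context) {
    const enableAfter = Date.parse(parameters.enableAfter);
    if (Number.isNaN(enableAfter)) {
      return false; // unparsable timestamp: stay disabled
    }
    return Date.now() >= enableAfter;
  }
}

module.exports = { TimeStampStrategy };
```

Registration would then happen at client initialization, e.g. `initialize({ url, appName, strategies: [new TimeStampStrategy()] })` in unleash-client; check the SDK documentation for the exact signature of your version.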
https://docs.getunleash.io/advanced/custom_activation_strategy/
Ruby SDK

You will need your API URL and your API token in order to connect the client SDK to your Unleash instance. You can find this information in the "Admin" section of the Unleash management UI.

require 'unleash'

@unleash = Unleash::Client.new(
  url: '<API url>',
  app_name: 'simple-test',
  custom_http_headers: { 'Authorization' => '<API token>' },
)

#Sample usage

To evaluate a feature toggle, you can use:

if @unleash.is_enabled? "AwesomeFeature", @unleash_context
  puts "AwesomeFeature is enabled"
end

If the feature is not found on the server, it will return false by default. However, you can override that by setting the default return value to true:

if @unleash.is_enabled? "AwesomeFeature", @unleash_context, true
  puts "AwesomeFeature is enabled by default"
end

Alternatively, by using if_enabled you can pass a code block to be executed:

@unleash.if_enabled "AwesomeFeature", @unleash_context, true do
  puts "AwesomeFeature is enabled by default"
end

#Variants

If no variant is found on the server, the fallback variant is used.

fallback_variant = Unleash::Variant.new(name: 'default', enabled: true, payload: { "color" => "blue" })
variant = @unleash.get_variant "ColorVariants", @unleash_context, fallback_variant
puts "variant color is: #{variant.payload.fetch('color')}"

#Client methods

Read more at github.com/Unleash/unleash-client-ruby
https://docs.getunleash.io/sdks/ruby_sdk/
What is no-code automation?

No-code automation is an approach to creating automated tests which allows you to test an application without writing a single piece of code or script. The aim is to make the setup so easy to use that automating a test scenario takes less time and requires almost no coding effort.

No-code automation removes the challenge of creating scripts/code to build automated tests. In addition, Sofy uses AI to create resiliency in automation and avoid the usual challenges of automation scripts, such as finding element locators, dynamic content changes, device form factor changes, etc.
https://docs.sofy.ai/mobile-automate/what-is-no-code-automation
Create Product Checkouts

You can create product checkouts from your landing pages by integrating your Swipe Pages account with Stripe.com. Follow the simple steps given below to complete the integration.

1. Account Creation in Stripe.com

Firstly, you need to open an account in Stripe.com. Once you have the account in place, you are all set to add your products and sell them from our landing pages.

2. Adding products

The products will be added within your account on Stripe.com. Click on the Products link in the left panel in the Dashboard and click the Add Product button to upload your product. You will be asked to include information about the product like the Title, Description, Pricing, Pictures, etc.

Note - Currently Swipe Pages only supports the checkout of one-time priced products. (Recurring payments are not supported.)

3. Customizing the design of the product checkout page

Click on the Settings link in the left panel in the Dashboard and click Branding to customize the styling of the checkout page. You will find the option to upload a logo and change the brand color and accent color in the Checkout page and Customer portal.

4. Integrate Swipe Pages with Stripe

In your Swipe Pages dashboard, click on the Ecommerce link in the left panel. Click on Connect with Stripe on this page to connect with your Stripe account.

5. Create checkouts in the landing page

You are just one step away from enabling product checkouts.

- Edit your landing page in Swipe Pages and go to the Checkouts panel in the Global settings.
- Click the button to add a new checkout.
- Choose the product for which you want to create a checkout from the page. You can add checkouts for more than one product as well. The Success URL will typically be linked to a thank-you page that the user will be redirected to after completing checkout. The Cancel/Back URL is the page where the user needs to be taken if the transaction is canceled or fails.
- The last step is to link the checkout to a button on the page. Select the button, choose the Action type as Checkout, and choose the corresponding product.

6. Purchase history and analytics

The order history and analytics can be found in the Ecommerce tab in the dashboard.
https://docs.swipepages.com/article/102-create-product-checkouts
Rich Snippets not showing in search results

If the rich snippets are not showing in the search results, the problem might lie in the snippets you are using. You can validate your schema using the Google rich schema validator here:

Visit the above link, enter the schema, and click Validate.

Also, you must keep in mind that it is completely up to Google whether to index the schema or not. So, even if the schema is valid, Google may choose not to show the content, for various reasons.
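For reference, this is the kind of structured-data markup the validator checks: a minimal, hypothetical Article snippet in JSON-LD (all values are placeholders, not taken from this article). On a page it would be embedded inside a `<script type="application/ld+json">` tag.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example post title",
  "datePublished": "2021-01-01",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}
```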
https://docs.themeisle.com/article/756-rich-snippets-not-showing-in-search-results
cfme.utils.ftp module

FTP manipulation library

@author: Milan Falešník <[email protected]>

- class cfme.utils.ftp.FTPClient(host, login, password, upload_dir='/')
  - __exit__(type, value, traceback)
    Exiting the context means just calling .close() on the client.
  - retrbinary(f, callback)
    Download a file. You need to specify the callback function, which accepts one parameter (data), to be processed.
- exception cfme.utils.ftp.FTPException
  Bases: exceptions.Exception
- class cfme.utils.ftp.FTPFile(client, name, parent_dir, time)
https://cfme-tests.readthedocs.io/en/17.14.0/modules/cfme/cfme.utils.ftp.html
Secrets are an important part of Naas. When you need to interact with other services, you need secrets, and like any other variable the temptation is strong to put them straight in your notebook. But that creates a security risk, since notebooks are replicated in many places: the versioning system, the output, and wherever you share them or push them to git! Use this simple feature instead to have global secure storage shared between your sandbox and production.

Secrets are local to your machine and encoded, which is a big layer of security for little effort.

Add a new secret to your Naas:

naas.secret.add(name="API_NAME", secret="API_KEY")

After running it, your data is safe and secure, and you can delete this line from your notebook. To edit a secret, use the function above with the same name and a changed secret parameter.

Return a secret stored in Naas:

naas.secret.get(name="MY_API_KEY")

You don't remember your secret names?

naas.secret.list()

You can remove a secret like this:

naas.secret.delete()

Need to understand why something went wrong?

naas.secret.add("test.csv", debug=True)
# or
naas.secret.delete("test.csv", debug=True)
https://docs.naas.ai/features/secret
Release Notes/099/2019.10000 Experimental

Build 2019.13330 - Apr 09, 2019

New Features

- RealSense TOP - Added support for RealSense D435i.
- Nvidia Flow Emitter COMP - Improvements
  - Added 'Shape Threshold' parameter to control how the emitter is created from the TOP's pixels.
  - Much faster performance when the TOP specified for 'Shape TOP' is animated.
  - Can now select which channels from the 'Shape TOP' pixels are used to create the emitter source.
- Nvidia Flow TOP - Improvements
  - Added support for simulation speed control.
  - Added support for Camera COMP's Background Color to show up in Nvidia Flow TOP output.
- Timer CHOP's Cue and goTo() method now supported when 'Segment Method' is set to Parallel Timers.
- OSC In CHOP / OSC In DAT / UDP In DAT / EtherDream DAT - Added a local address parameter to only listen on a specific IP. Great for working with multiple network interface controllers (NICs).
- Custom Operators - Added a /Plugins directory next to the .toe file that will also be searched for custom operators. This allows projects to move more easily between machines that do not have the custom OPs installed, and helps manage project-based custom OPs more easily for deployment.
- Line MAT - Now works with an orthographic camera; also fixed a point size issue in orthographic cameras.
- Private .toe file access expanded.
  - An external Component inside a private .toe can now be saved without the component becoming private.
  - A .toe file can be saved by a python script (privacy maintained).
- Unicode character keyboard input for complex languages is now working on macOS.

New Palette

- Widgets - Widget kit update.
  - Improved control over drop-down menu item height and look.
  - Radio button layout improvements, including 'Grid' children layout mode.
  - Field components now lay out label and fields separately.
  - Rocker widget no longer errors when you remove text for the label.
  - Label widget now has a background image parameter.
- Palette:kantanMapper - Improvements
  - Fixed an issue where rows and columns could not be added in the Grid Warp mode.
  - User interface is now relative to the root level, resolving issues like cut-off menus and more.
- Palette:moviePlayer - Improvements
  - Reduced overall CPU usage and CPU time to load a new movie. More robust.
  - Added option to enable/disable roller wheel zoom and mouse left/middle click/drag pan/zoom controls.
  - Added 'Go Back to Previous Movie' pulse button parameter.
- movieEngine - Click/drag on image to scrub. Shift click/drag to scrub and pause. Middle click to pause/play.

Bug Fixes and Improvements

- CHOP Execute DATs - Will now execute earlier in the frame, trying to do their operations before other nodes that rely on their scripts cook.
- Copy SOP - Fixed excessive cooking that could sometimes occur when fetching stamp parameters.
- Point SOP - Fixed crash that can occur if the 2nd input doesn't have the same attributes as the first input and is referenced using one of the pointSOP Class members.
- Line MAT - Fixed the Vector:Attribute transformation.
- Panel Alignment 'Max Per Line' now takes into consideration the widths and heights of all the children to work better with Fill Mode.
- Parameter help now shown for disabled parameters (alt+rollover).
- Pasting a parameter as 'Expression Reference' now always uses relative paths again; it will not insert Parent Shortcuts anymore.
- Initialize CUDA for any C++/Custom OP, just in case it ends up using it internally.
- Default assets added to FBX COMP and USD COMP.
- Fixed issue with Select COMPs causing unnecessary cooking.
- Fixed various hangs and crashes.
- Faster file saving when saving with Dongle Privacy.

Build 2019.12330 - Mar 12, 2019

New Features

- FBX COMP / USD COMP - Added 'Reload File' pulse that simply reloads any assets from the file (making no changes to the network inside the COMP).
- Widgets - Bugs fixed and some general improvements.
  - Added UI/Basic Widgets/Tools/autoUI.tox
  - Presets Component updates and fixes
- Text TOP - Some tweaks to line breaking behavior for word-wrap.
- Field COMP - Added parameters to Field COMP for total digits, decimal digits and trailing zeroes to handle float and integer fields.
- Line MAT - Fixed points transformations in SOP space.
- Line MAT - Lift Direction parameter option "Along Camera Z Axis" changed to "Toward Camera". The behavior of this option has also changed slightly.
- Bound parameters now respect their local ranges, even if bound to an out-of-range master value.

Build 2019.11370 - Feb 11, 2019

New Features

- Text SOP - New 'Closed Polygons (Filled Holes)' output type. Particularly useful for use with the Line MAT.
- USD COMP / FBX COMP - Added a 'Callbacks DAT' parameter and callbacks DAT with an onImport function. This allows for post-processing the imported geometry network immediately after import.
- Offline documentation now included with the TouchDesigner installation. When there is no internet connection, TouchDesigner will automatically use offline help.

New Python

- DAT Class.replaceRow - Replace a row in a table.
- DAT Class.replaceCol - Replace a column in a table.
- OP Class.copyParameters - Added custom/builtin keywords that can disable copying of custom or builtin parameters.

New Palette

- Widgets - Bug fixes and improvements.

Bug Fixes and Improvements

- NDI updated to 3.8
- Binding - Pulse parameters can now be directly linked through Binding.
- Fixed the Customize Component dialog's bind menu when dragging a parameter from an operator into the dialog. No longer binds as reference for both options.
- Fixed case where the first ASIO device on Windows 10 systems was being skipped.
- Fixed intermittent crash when launching an external editor on a DAT.
- Fixed wireframe shader not working on macOS in the Geometry Viewer when something is selected.
- Fixed reported crashes.

Build 2019.10700 - Jan 21, 2019

New Features

- OP.localCook moved to COMP.localCook, always True for OPs internally.

New Palette

Tree Browser component was added to the palette.

Bug Fixes and Improvements

- Movie File Out TOP - Fixed H264 and H265 movies saving out an extra frame.
- OP Find DAT - Fixed overcooking when target OP select state changed.
- Substitute DAT - Fixed issues when using wildcard or unicode in this DAT's parameters.
- XML DAT working again.
- Beat CHOP no longer advances when the timeline is paused (but 'power' is on and still cooking).

Backwards Compatibility

- BACKWARDS COMPATIBILITY WARNING - Par.cloneImmune renamed to Par.styleCloneImmune; update any scripts that may have relied on this.

Build 2019.10280 - Jan 10, 2019

New Features

- TOPs - Added "Parent Panel Size" entry to Output Resolution menu parameter; this will set the resolution to the width/height of the Parent Panel. If it is not the immediate parent, it will look up the hierarchy and use the width/height of the first parent panel found.
- Particle SOP - Added new parameter 'Surface Attraction' to control the surface attraction force.

New Python

Bug Fixes and Improvements

- Movie File Out TOP - Fixed 'Max Threads' not working correctly when trying to change it after a recording has been done.
- Text SOP - Added support for outputting open polygons.
- Text SOP - Added support for extruding on macOS.
- Text TOP / Text SOP - Changed 'Language' to 'Break Language' and made it a menu.
- Text TOP - Fixed crash when referring to an outside Field Component.
- GLSL MAT - Added instanceID overloads for TDDeform*() functions.
- GLSL MAT - Fixed crash that occurs when using 3D textures as a map.
- Copy SOP - Reorganized 'Attribute' page with popup menus.
- OP Find DAT - Fixed very long incremental searches when starting from root.
- CPlusPlus TOP / CPlusPlus CHOP / CPlusPlus SOP / CPlusPlus DAT - cookCount added as available data to the OP_*Input classes.
- DAT - Fixed DATs to allow inserting or appending their own columns/rows.
- Text DAT - Fixed cases where extra random characters were added to DATs when saving from an external editor.

Build 2018.42312 - Dec 12, 2018

Hotfix for 42310 - Note this is a branch build with a few important fixes.

Bug Fixes and Improvements

- Work around new macOS 10.14.2 bug.
- Texture 3D TOP - Fixed a crash that occurs when using R, RG or RGB pixel formats.

Build 2018.42310 - Nov 29, 2018

New Features

- Text SOP - Added ability to extrude generated geometry. Windows only this build; the next build will have macOS functioning.

New Python

- Par.exportSource - Returns the object exporting to this parameter. For example Cell, Channel, or None.

Bug Fixes and Improvements

- Copy SOP - Fixed error that occurs in some cases when trying to stamp.
- Text SOP - Behavior on macOS is more robust now.
- Text TOP - Added support for scalable fonts and mipmap texture fonts on macOS.
- List COMP - Fixed crash in initialize function.
- Geometry COMP - Fixed crash that can occur when turning off Instancing.
- Fixed common crash that occurs when changing SOPs and navigating around networks.
- Fixed opening/closing/duplicating pane crash.

Backwards Compatibility

- BACKWARDS COMPATIBILITY WARNING - GLSL TOP, GLSL MAT - GLSL 1.20 has been deprecated on Windows (it was never supported on macOS). Please upgrade GLSL shaders to 3.30 or later for compatibility with future release branches.
- BACKWARDS COMPATIBILITY WARNING - $TOUCH_START_COMMAND has been removed.
- BACKWARDS COMPATIBILITY WARNING - DATs named /start and /stream/start will no longer execute on file start. Use the Execute DAT instead.

Build 2018.41570 - Nov 14, 2018

Release Highlights

Unicode

Physics

- Physics - A new group of Dynamics Components now supports hard-body physics simulations using the Bullet Physics Library.
- Import Select TOP, Import Select CHOP, Import Select SOP - These new operators work inside FBX COMPs and USD COMPs to extract textures, animation channels, or geometry meshes from the imported FBX/USD hierarchy into individual OPs.

Custom Operators

- Custom OPs - C++ OPs you create now have a new API allowing them to act like regular built-in operators. This includes giving them their own custom names and having them available in the OP Create Dialog.
- C++ DAT - The DAT family finally gets a C++ operator to create DAT Custom OPs as well.
- CPlusPlus OPs and Custom OPs now work with Non-Commercial licenses; they no longer require a Commercial or Pro license.

New Operators

- Line MAT - New MAT that provides constant shading of lines with 3D depth rolloff and color controls. Controls for line widths, line end points, drawing points, drawing vectors and arrows, and more are included.
- Audio NDI CHOP - New CHOP to receive audio over NDI. New parameter for NDI Out TOP to select the audio CHOP source.
- NDI DAT - New DAT to list all NDI sources found; also has callbacks to trigger scripts on events.
- ZED TOP / ZED CHOP / ZED SOP - Adds support for Stereolabs ZED cameras.
- Lookup DAT - New DAT to look up values between a DAT and a lookup table.
- Parameter DAT - New DAT for getting parameter information from any OP. Especially useful for parameters whose values are strings.
- Process COMP - Work in progress, stay tuned for updates.
- Widget COMP - New Panel COMP that will be a base for the upcoming Widget UI system.
Work in progress, stay tuned for updates.

New Features

TOPs & Rendering

- using the failsafe on AJA Corvid 24 devices.
- Text TOP - Added parameters to choose between text alignment using font metrics or the bounding box of the current string.
- Phong MAT, PBR MAT - Added 'Displace Vertices' parameter that will displace vertex positions using a heightmap.

CHOPs & DATs

- Speed CHOP / Spring CHOP / Slope CHOP - Added 'Per Sample' option; Shuffle CHOP no longer needed!
- SOP to CHOP - Added toggles to SOP to CHOP for Position, Color, Normal and Texture attributes. Attributes in 'Custom' have autonamed channels.
- OSC Out CHOP / OSC Out DAT / TCP/IP DAT / UDP Out DAT - New parameter to support multiple NICs (Network Interface Controllers).
- OSC In CHOP - Added a non-timeslice mode that will only cook when receiving data.
- Pattern CHOP - Added a new type called 'Step' and new 'Step per Cycle' and 'Phase Step per Channel' parameters.
- Timer CHOP - Rewritten for optimization.

API & SDK Updates

- Updated to CUDA 9.2
- Updated to ffmpeg 4.0
- Updated to AJA SDK 14.2.0.9
- Updated ZED SDK to 2.5.1
- Updated to OpenEXR 2.3.0 for Movie File In TOP.
- Updated to Alembic 1.7.9 for Alembic SOP.
- Updated FBX to 2019.0 for FBX COMP.
- Added v4.0.0 Orion support to Leap Motion CHOP / Leap Motion TOP.
- Upgraded RealSense TOP / RealSense CHOP Windows API version to 11.0.27.1384 (2016 R3). This removes support for the R200 when using this API, but adds support for Hand Cursor as well as stability improvements.

New Python

- Wave CHOP - Added support for .chanIndex and .sampleIndex
- DAT Class - Added findCells and findCell python methods to search cells by value.
- OpviewerCOMP_Class - opviewerCOMP.isViewable(path) - Tests for potential recursion to find out if an OP is acceptable for viewing.

Improvements and Bug Fixes

- Render TOP / Render Pass TOP - Re-organized parameters.
- Render TOP - Added some warnings if alpha != 1.0 is detected but blending isn't enabled on the material.
- Movie File In TOP - Improved H.265 decoding performance.
- Movie File In TOP - Fixed crash that can occur when failing to open a .exr file.
- Movie File In TOP - Removed the global CPU movie cache feature; it was generally not useful.
- Video Stream Out TOP - Applied hotfix for a stack overflow vulnerability recently found in the live555 library.
- Text TOP - Adjusted font spacing and layout to be more Unicode friendly.
- Text TOP - Fixed some vertical font alignment issues.
- DAT to CHOP - Added default value parameter for when values are not specified in the DAT.
- OSC In CHOP - Fixed values being delayed by one when non-queued and intermittent values are sent.
- DMX In CHOP / DMX Out CHOP - Removed restrictions from universes (prev. 0-15) to make it easier to work with external programs that only use a universe number.
- Audio Stream Out CHOP - Fixed issue which caused audio loss after 35 minutes.
- SOP to CHOP / CHOP to SOP - Removed the 'Animated' method from CHOP to SOP and SOP to CHOP.
- Expression CHOP - Added me.inputVal, me.chanIndex, me.sampleIndex to the Optimized Expression engine.
- Noise CHOP - Added 'Offset' parameter to offset noise.
- Noise SOP - Added 'Offset' parameter to offset noise.
- Point SOP - Added support for many commonly used Python terms to the Optimized Expression engine for this SOP.
- Render Pick DAT - Allow Render Pick DAT to work without a 'select' column.
- OP Find DAT improvements
  - Fixed output for non-default-only, when other fields are default.
  - Fixed 'Combine Filters = Any' logic.
- List COMP improvements and additions
  - Added new attribute rowStretch. Allows for vertically stretchable rows.
  - Now updates panel values dragroll/u/v.
  - Now supports dropping of parameters, text, channels.
- Perform Window now unconstrained if pointing to a Panel COMP set to unconstrained.
- Added Optimized Expression support for panelCOMP.panel.<value>.val
- CrashAutoSave.toe is now suffixed with the project's filename.

Backwards Compatibility

- BACKWARDS COMPATIBILITY WARNING - 'Use Startup File' .toe file preferences need to be reset.
- The binding/size parameters on the Atomic Counters page have been replaced with a uniform name parameter.

Known Issues

- On macOS, for some languages such as Japanese, the unicode character input dialog is not enabled yet. You can copy & paste unicode characters in at this time. Fix in progress.
- TouchPlayer may not be able to load movies from a unicode path.
https://docs.derivative.ca/index.php?title=Release_Notes/099/2019.10000_Experimental&oldid=15844
Welcome to Bread.

Integrate with our supported platforms
If you happen to be using one of these popular e-commerce platforms, then we've got you covered. Learn more about Bread's capabilities.

Customize Styles
Easily customize the Bread button to match your brand's look and feel, so your customer experience is consistent.

Marketing Bread
Drive more conversions by leading your customers to apply for financing at the right times.

Merchant Operations
Learn more about how to manage your day-to-day operations with Bread.

Video Library
Check out our tutorial library for videos that walk through our frequently asked questions.
https://docs.getbread.com/
Data Origin Authentication

Web Service Security: Scenarios, Patterns, and Implementation Guidance for Web Services Enhancements (WSE) 3.0

Microsoft Corporation, patterns & practices

December 2005

Contents
Context
Problem
Forces
Solution
Resulting Context
Related Patterns
More Information

Context

Problem

How do you prevent an attacker from manipulating messages in transit between a client and a Web service?

Forces

Any of the following conditions justifies using the solution described in this pattern:

- An altered message can cause the message recipient to behave in an unintended and undesired way. The message recipient should verify that the incoming message has not been tampered with.
- An attacker could pose as a legitimate sender and send falsified messages. The message recipient should verify that incoming messages originated from a legitimate sender.

The following condition is an additional reason to use the solution:

Solution

Use data origin authentication, which enables the recipient to verify that messages have not been tampered with in transit (data integrity) and that they originate from the expected sender (authenticity). In cases where the client denies having performed the action (nonrepudiation), you can use digital signatures to provide evidence that a client has performed a particular action that is related to data.

Digital signatures can be used for nonrepudiation purposes, but they may not be sufficient to provide legal proof of nonrepudiation. By itself, a digital signature is just a mechanism to capture a client's association to data. In cases where data has been digitally signed, the degree to which an individual or organization can be held accountable is established in an agreement between the party that requires digital signatures and the owner of the digital signature.
Security Concepts Proof-of-possession is a value that a client presents to demonstrate knowledge of either a shared secret or a private key to support client authentication. Proof-of-possession using a shared secret can be established using the actual shared secret, such as a user's password, or a password equivalent, such as a digest of the shared secret, which is typically created with a hash of the shared secret and a salt value. Proof-of-possession can also be established using the XML signature within a SOAP message where the XML signature is generated symmetrically based on the shared secret, or asymmetrically based on the sender's private key. Participants Data origin authentication involves the following participants: - Sender. The sender is the originator of a message. A client can send a request message to a Web service, and a Web service can send a response message back to the client that has sent the request message. - Recipient. The recipient is the entity that receives a message from the sender. A Web service is the recipient of a request message sent by a client. A client is the recipient of a response message that it receives from a Web service. Process Two types of signatures can be used to sign a message: symmetric and asymmetric. Note The following discussion refers to both XML signatures and digital signatures. XML signatures are used for SOAP message security with either a symmetric algorithm or an asymmetric algorithm. Digital signatures are created explicitly with an asymmetric algorithm and may or may not be used for SOAP message security. Symmetric Signatures A symmetric signature is created by using a shared secret to sign and verify the message. A symmetric signature is commonly known as a Message Authentication Code (MAC). A MAC is created by computing a checksum with the message content and the shared secret. 
A MAC can be verified only by a party that has both the shared secret and the original message content that was used to create the MAC. The most common type of MAC is a Hashed Message Authentication Code (HMAC). The HMAC protocol uses a shared secret and a hashing algorithm (such as MD5, SHA-1, or SHA-256) to create the signature, which is added to the message. The message recipient uses the shared secret and the message content to verify the signature by recreating the HMAC and comparing it to the HMAC that was sent in the message. If security is your primary consideration for choosing a hashing algorithm for an HMAC, you should use SHA-256 where possible for the hashing algorithm to create an HMAC. This is because it is the least likely algorithm to produce collisions (when two different pieces of data produce the same hash value). MD5 provides a high-performance method for creating checksums, though it is not a good choice for use as an HMAC because it can be compromised by brute force attack in a relatively short period of time. SHA-1 is currently the most widely adopted algorithm, so it may be required for interoperability reasons. Because of recent advances in cryptographic attacks against SHA-1, there is movement toward adopting more secure hash algorithms, such as SHA-256, as the recommended standard. To protect a signature from offline cryptanalysis—especially those created with an older hash algorithm such as MD5 or SHA1—the hash value should be encrypted as sensitive data. The shared key and algorithm that are used to encrypt the hash may depend on the symmetric algorithm used to encrypt sensitive data. (For more information, see Data Confidentiality in Chapter 2, "Message Protection Patterns.") When it is used to create an HMAC, the names of these algorithms are preceded by the term "HMAC" (for example, HMAC SHA-1 or HMAC MD5). Figure 1 illustrates the process of using a MAC to sign a message. Figure 1. 
Signing a message using a symmetric signature As illustrated in Figure 1, signing a message using a symmetric signature involves the following steps: - The sender creates a MAC using a shared secret key and attaches it to the message. - The sender sends the message and MAC to the recipient. - The recipient verifies the MAC that was sent with the message by using the same shared secret key that was used to create it. By signing with a shared secret, both data integrity and data origin authenticity are provided for the signed message content. However, symmetric signatures are not usually used to provide nonrepudiation because shared secrets are known by multiple parties. This makes it more difficult to prove that a specific party used the shared secret to sign the message. Asymmetric Signatures An asymmetric signature is processed with two different keys; one key is used to create the signature and the other key is used to verify the signature. The two keys are related to one another and are commonly referred to as a public/private key pair. The public key is generally available and can be distributed with the message; the private key is kept secret by the owner and is never sent in a message. A signature that is created and verified with an asymmetric public/private key pair is referred to as a digital signature. Figure 2 illustrates the process of using asymmetric keys to sign a message. Figure 2. Signing a message with an asymmetric signature As illustrated in Figure 2, signing a message with an asymmetric signature involves the following steps: - The sender signs the message content using the sender's private key and attaches the signature to the message. - The sender sends the message and digital signature to the recipient. - The recipient verifies the digital signature using the sender's public key that corresponds to the private key that was used to sign the message.
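The symmetric signing and verification steps above can be sketched in a few lines using Python's standard library hmac module. The secret and message values below are illustrative placeholders only; WSE performs the equivalent operations internally as part of SOAP message security:

```python
import hashlib
import hmac

def sign(shared_secret: bytes, message: bytes) -> bytes:
    # Step 1: the sender creates a MAC over the message content.
    return hmac.new(shared_secret, message, hashlib.sha256).digest()

def verify(shared_secret: bytes, message: bytes, mac: bytes) -> bool:
    # Step 3: the recipient recreates the HMAC and compares it to the
    # HMAC that was sent, using a constant-time comparison.
    expected = hmac.new(shared_secret, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)

# The sender and recipient must share this secret out of band.
secret = b"example-shared-secret"
message = b"<soap:Envelope>payroll transfer</soap:Envelope>"

mac = sign(secret, message)                      # attached to the message
assert verify(secret, message, mac)              # untampered message passes
assert not verify(secret, message + b"!", mac)   # altered message fails
```

Because HMAC-SHA256 is keyed, an attacker who alters the message in transit cannot produce a matching MAC without knowing the shared secret.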
The algorithm that is most commonly used to create a digital signature is the Digital Signature Algorithm (DSA). DSA uses the public/private key pairs created for use with the RSA algorithm to create and verify signatures. For more information, see Data Confidentiality in Chapter 2, "Message Protection Patterns." For both signing and encryption purposes, asymmetric keys are often managed through a Public Key Infrastructure (PKI). Information that describes the client is bound to its public key through endorsement from a trusted party to form a certificate. Certificates allow a message recipient to verify the private key in a client's signature using the public key in the client's certificate. For more information about X.509, see X.509 Technical Supplement in Chapter 7, "Technical Supplements." Typically, digital signatures are used to support requirements for nonrepudiation. This is because access to the private key is usually restricted to the owner of the key, which makes it easier to verify proof-of-ownership. Asymmetric signatures require more processing resources than symmetric signatures. For this reason, asymmetric signatures are usually optimized by hashing the message content and then asymmetrically signing the hash. This reduces the size of the data that the asymmetric operation is applied to. In cases where more than one message is exchanged, it is also possible to first exchange a high-entropy shared secret that is encrypted asymmetrically. Based on the shared secret, additional message exchanges are secured symmetrically. Key derivation techniques are often used to add variability to shared secrets that are used over multiple message exchanges. For an example of this case, see "Extension 1–Establishing a Secure Conversation" in Brokered Authentication: Security Token Service (STS) in Chapter 1, "Authentication Patterns." 
It is important to remember that this type of optimization can remove the ability of asymmetric signatures to isolate which of the two parties signed a message. Example When using message layer authentication, it is often necessary to include data origin authentication as part of the authentication process. One example of this is the use of X.509 certificates to perform message layer authentication. X.509 is based on public key cryptography, so the type of data origin authentication that is used is an asymmetric signature. For example, a business customer at a bank may sign payroll transfers using his or her certificate private key. The bank can then verify that the payroll transfer request came from the correct business customer and that the message had not been tampered with in transit between the business customer and the bank. Resulting Context This section describes some of the more significant benefits, liabilities, and security considerations of using this pattern. **Note** The information in this section is not intended to be comprehensive. However, it does discuss many of the issues that are most commonly encountered for this pattern. Benefits The Data Origin Authentication pattern makes it possible for the recipient to detect whether a message has been tampered with. Also, the origin of the message can be traced to an identifiable source. Liabilities The liabilities associated with the Data Origin Authentication pattern include the following: - Cryptographic operations, such as data signing and verification, are computationally intensive processes that impact system resource usage. This affects the scalability and performance of the application. - Key management, which is responsible for maintaining the integrity of keys, can have a significant administrative overhead. Factors that affect the administrative complexity of key management include: - The number and type of keys used. - The type of cryptography used (symmetric or asymmetric). 
- The key management infrastructure in use. Security Considerations Security considerations associated with the Data Origin Authentication pattern include the following: - If a message is being signed, you should ensure that the signature within the message is encrypted. In many cases, a signature that is not encrypted can be the target of a cryptographic attack. - If too much data is encrypted with the same symmetric key, an attacker can intercept several messages and attempt to cryptographically attack the encrypted messages, with the goal of obtaining the symmetric key. To minimize the risk of this type of attack, you should consider generating session-based encryption keys that have a relatively short life span. Typically, these session keys are derived from a master symmetric key such as a shared identity secret. Usually, the session key is exchanged using asymmetric encryption during the initial interaction between a sender and recipient. Session keys should be discarded and replaced at regular intervals, based on the amount of data or the number of messages that they are used to encrypt. - Much of the strength of symmetric encryption algorithms comes from the randomness of their encryption keys. If keys originate from a source that is not sufficiently random, attackers may narrow down the number of possible values for the encryption key. This makes it possible for a brute force attack to discover the key value of encrypted messages that the attacker has intercepted. For example, a user password that is used as an encryption key can be very easy to attack because user passwords are typically a non-random value of relatively small size that a user can remember without writing it down. - You should use published, well-known encryption algorithms that have withstood years of rigorous attacks and scrutiny. 
Encryption algorithms that have not been subjected to rigorous review by trained cryptologists may contain undiscovered flaws that are easily exploited by an attacker. Related Patterns The following child patterns are related to the Data Origin Authentication pattern: - Implementing Direct Authentication with UsernameToken in WSE 3.0. This pattern focuses on using direct authentication to verify message signatures at the message layer in WSE 3.0. - Implementing Message Layer Security with Kerberos in WSE 3.0. This pattern provides guidelines for implementing brokered authentication, authorization, data integrity, and data origin authentication with the Kerberos version 5 protocol in WSE 3.0. More Information For more information about HMAC, see RFC 2104 - HMAC: Keyed-Hashing for Message Authentication. For more information about WS-Security version 1.0, see the OASIS Standards and Other Approved Work (including WS-Security) on the OASIS Web site. For more information about threats and countermeasures, see the following: - Security Challenges, Threats and Countermeasures Version 1.0 on the WS-I Web site. - Chapter 2, "Threats and Countermeasures," of Improving Web Application Security: Threats and Countermeasures on MSDN.
https://docs.microsoft.com/en-us/previous-versions/msp-n-p/ff648434(v=pandp.10)?redirectedfrom=MSDN
2020-02-17T07:56:30
CC-MAIN-2020-10
1581875141749.3
[array(['images/ff648434.ch2_doa_f01%28en-us%2cpandp.10%29.gif', 'Ff648434.ch2_doa_f01(en-us,PandP.10).gif Ff648434.ch2_doa_f01(en-us,PandP.10).gif'], dtype=object) array(['images/ff648434.ch2_doa_f02%28en-us%2cpandp.10%29.gif', 'Ff648434.ch2_doa_f02(en-us,PandP.10).gif Ff648434.ch2_doa_f02(en-us,PandP.10).gif'], dtype=object) ]
docs.microsoft.com
Mailbox Server Plugin APIs for mailbox server. More... Detailed Description APIs for mailbox server. See mailbox-server.h for source code. Function Documentation Adds a message to the mailbox queue. The message is stored in the internal queue until the destination node queries the mailbox server node for messages, or until the message times out. - Parameters - - Returns - An EmberAfMailboxStatus value of: - EMBER_MAILBOX_STATUS_SUCCESS if the message was successfully added to the packet queue. - EMBER_MAILBOX_STATUS_INVALID_CALL if the passed message is invalid. - EMBER_MAILBOX_STATUS_INVALID_ADDRESS if the passed destination address is invalid. - EMBER_MAILBOX_STATUS_MESSAGE_TOO_LONG if the payload size of the passed message exceeds the maximum allowable payload for the passed transmission options. - EMBER_MAILBOX_STATUS_MESSAGE_TABLE_FULL if the packet table is already full. - EMBER_MAILBOX_STATUS_MESSAGE_NO_BUFFERS if not enough memory buffers are available for storing the message content. Mailbox Server Message Delivered Callback. This callback is invoked at the server when a message submitted locally by the server was successfully delivered or when it timed out. - Parameters -
https://docs.silabs.com/connect-stack/2.5/group-mailbox-server
2020-02-17T08:11:10
CC-MAIN-2020-10
1581875141749.3
[]
docs.silabs.com
-projectors config file. You can also add them to the Projectionist. This can be done anywhere, but typically you would do this in a ServiceProvider of your own. namespace App\Providers; use App\Projectors\AccountBalanceProjector; use Illuminate\Support\ServiceProvider; use Spatie\EventProjector\Facades\Projectionist; class EventProjectorServiceProvider extends ServiceProvider { public function register() { // adding a single projector Projectionist::addProjector(AccountBalanceProjector::class); } } A projector can look like this: use Spatie\EventProjector\Projectors\Projector; use Spatie\EventProjector\Projectors\ProjectsEvents; class MyProjector implements Projector { use ProjectsEvents; public function onEventHappened(EventHappened $event) { // do some work } } Just adding a typehint of the event you want to handle makes our package call that method when the typehinted event occurs. All methods specified in your projector can also make use of method injection, so you can resolve any dependencies you need in those methods as well. Getting the uuid of an event In most cases you want to have access to the event that was fired. When using aggregates your events probably won’t contain the uuid associated with that event. To get to the uuid of an event simply add a parameter called $aggregateUuid that is typehinted as a string. // ... public function onMoneyAdded(MoneyAdded $event, string $aggregateUuid) { $account = Account::findByUuid($aggregateUuid); $account->balance += $event->amount; $account->save(); } The order of the parameters given to an event handling method like onMoneyAdded does not matter. We’ll simply pass the uuid to any arguments named $aggregateUuid. Manually registering event handling methods The $handlesEvents property is an array which has event class names as keys and method names as values. Whenever an event is fired that matches one of the keys in $handlesEvents the corresponding method will be fired. You can name your methods however you like. 
Here's an example where we listen for a MoneyAdded event: namespace App\Projectors; use App\Account; use App\Events\MoneyAdded; use Spatie\EventProjector\Projectors\Projector; use Spatie\EventProjector\Projectors\ProjectsEvents; class AccountBalanceProjector implements Projector { use ProjectsEvents; /* * Here you can specify which event should trigger which method. */ protected $handlesEvents = [ MoneyAdded::class => 'onMoneyAdded', ]; // ... } You can write this a little shorter. Just put the class name of an event in that array. The package will infer the method name to be called. It will assume that there is a method called on followed by the name of the event. Here's an example: // in a projector // ... protected $handlesEvents = [ MoneyAdded::class, ];
https://docs.spatie.be/laravel-event-projector/v2/using-projectors/creating-and-configuring-projectors/
2020-02-17T06:55:24
CC-MAIN-2020-10
1581875141749.3
[]
docs.spatie.be
13.4. How to change the domain registered to my license To change your domain, go to the License Settings Page and click the Deactivate on this domain button. After deactivation, you are free to use your license key to activate the plugin on another domain. If you cannot reach your old domain, please send a support email (See: How to send a support request) including your purchase code and stating that you want the domain that is currently registered to your license to be removed. An example email might look like the following: An example email requesting the removal of the currently registered domain Hi, I want the domain currently registered to my license key to be removed. My license key is enter your license key (purchase code) here
https://docs.wpcontentcrawler.com/1.9/faqs/how-to-change-the-domain-registered-to-my-license.html
2020-02-17T08:13:48
CC-MAIN-2020-10
1581875141749.3
[]
docs.wpcontentcrawler.com
There is an extra link in your top menu bar. The configuration page contains four main areas. Info - Allowed tokens and resources - Invoice section - Packaging section - Global settings - Static Header/footer You may set your static header or footer sections. Static means the sections are fixed to the top/bottom of the page. Static header example Static footer example
http://docs.nop4you.com/configuration-3
2020-02-17T07:15:56
CC-MAIN-2020-10
1581875141749.3
[]
docs.nop4you.com
Can I cancel my own challenge? Yes, you can cancel the challenges that you’ve created anytime. Open the settings of the challenge and scroll down to the End Challenge button. This ends the challenge for everyone who is taking part, but your goal is not affected. You can set up another challenge with this goal anytime.
https://docs.goalifyapp.com/can_i_cancel_my_own_challenge.en.html
2020-02-17T07:48:53
CC-MAIN-2020-10
1581875141749.3
[]
docs.goalifyapp.com
Varbase utilizes Drupal 8's configuration management, which makes it extremely resilient when managing update paths for configuration changes across Varbase versions. In Varbase, we categorize configuration changes and updates into 4 types: Varbase uses the Update Helper module (a module made by the Thunder team), which provides a UI using the Checklist API. This is a good tool, as it shows the site admin, developer, or maintainer what new updates are available from inside the site itself. You can navigate to (where my.varbase-site.local is the URL for your website) or go to Administration → Reports → Checklists → Varbase Updates to learn about the new changes and updates introduced in your Varbase site.
https://docs.varbase.vardot.com/updating-varbase/handling-configuration-updates
2020-02-17T06:33:22
CC-MAIN-2020-10
1581875141749.3
[]
docs.varbase.vardot.com
writeTargetingDistance - Write targeting model distances to a file Description writeTargetingDistance writes a 5-mer targeting distance matrix to a tab-delimited file. Usage writeTargetingDistance(model, file) Arguments - model - TargetingModel object with mutation likelihood information. - file - name of file to write. Details The targeting distance matrix is written as a tab-delimited 5x3125 matrix. Rows define the mutated nucleotide at the center of each 5-mer, one of c("A", "C", "G", "T", "N"), and columns define the complete 5-mer of the unmutated nucleotide sequence. NA values in the distance matrix are replaced with distance 0. Examples ### Not run: # Write the HH_S5F targeting model to the working directory as hh_s5f.tsv # writeTargetingDistance(HH_S5F, "hh_s5f.tsv") See also Takes as input a TargetingModel object and calculates distances using calcTargetingDistance.
https://shazam.readthedocs.io/en/stable/topics/writeTargetingDistance/
2020-02-17T06:38:59
CC-MAIN-2020-10
1581875141749.3
[]
shazam.readthedocs.io
Core Algorithms¶ Read Preprocessing Algorithms¶ In ADAM, we have implemented the three most-commonly used pre-processing stages from the GATK pipeline (DePristo et al. 2011). In this section, we describe the stages that we have implemented, and the techniques we have used to improve performance and accuracy when running on a distributed system. These pre-processing stages include: - Duplicate Removal: During the process of preparing DNA for sequencing, reads are duplicated by errors during the sample preparation and polymerase chain reaction stages. Detection of duplicate reads requires matching all reads by their position and orientation after read alignment. Reads with identical position and orientation are assumed to be duplicates. When a group of duplicate reads is found, each read is scored, and all but the highest quality read are marked as duplicates. We have validated our duplicate removal code against Picard (The Broad Institute of Harvard and MIT 2014), which is used by the GATK for Marking Duplicates. Our implementation is fully concordant with the Picard/GATK duplicate removal engine, except we are able to perform duplicate marking for chimeric read pairs. [2] Specifically, because Picard’s traversal engine is restricted to processing linearly sorted alignments, Picard mishandles these alignments. Since our engine is not constrained by the underlying layout of data on disk, we are able to properly handle chimeric read pairs. - Local Realignment: In local realignment, we correct areas where variant alleles cause reads to be locally misaligned from the reference genome. [3] In this algorithm, we first identify regions as targets for realignment. In the GATK, this identification is done by traversing sorted read alignments. In our implementation, we fold over partitions where we generate targets, and then we merge the tree of targets. This process allows us to eliminate the data shuffle needed to achieve the sorted ordering. 
As part of this fold, we must compute the convex hull of overlapping regions in parallel. We discuss this in more detail later in this section. After we have generated the targets, we associate reads to the overlapping target, if one exists. After associating reads to realignment targets, we run a heuristic realignment algorithm that works by minimizing the quality-score weighted number of bases that mismatch against the reference. - Base Quality Score Recalibration (BQSR): During the sequencing process, systemic errors occur that lead to the incorrect assignment of base quality scores. In this step, we label each base that we have sequenced with an error covariate. For each covariate, we count the total number of bases that we saw, as well as the total number of bases within the covariate that do not match the reference genome. From this data, we apply a correction by estimating the error probability for each set of covariates under a beta-binomial model with uniform prior. We have validated the concordance of our BQSR implementation against the GATK. Across both tools, only 5000 of the 180B bases (\(<0.0001\%\)) in the high-coverage NA12878 genome dataset differ. After investigating this discrepancy, we have determined that this is due to an error in the GATK, where paired-end reads are mishandled if the two reads in the pair overlap. - ShuffleRegionJoin Load Balancing: Because of the non-uniform distribution of regions in mapped reads, joining two genomic datasets can be difficult or impossible when neither dataset fits completely on a single node. To reduce the impact of data skew on the runtime of joins, we implemented a load balancing engine in ADAM’s ShuffleRegionJoin core. This load balancing is a preprocessing step to the ShuffleRegionJoin and improves performance by 10–100x. The first phase of the load balancer is to sort and repartition the left dataset evenly across all partitions, regardless of the mapped region. 
This offers significantly better distribution of the data than the standard binning approach. After rebalancing the data, we copartition the right dataset with the left based on the region bounds of each partition. Once the data has been copartitioned, it is sorted locally and the join is performed. In the rest of this section, we discuss the high level implementations of these algorithms.
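As a rough single-machine illustration of the duplicate-marking rule described above (match reads by position and orientation, score each read in a group, and mark all but the highest-quality read as duplicates), here is a Python sketch. The dictionary fields are illustrative and do not reflect ADAM's actual read schema:

```python
from collections import defaultdict

def mark_duplicates(reads):
    """Group reads by (position, orientation); within each group, keep the
    highest-quality read and mark the rest as duplicates."""
    groups = defaultdict(list)
    for read in reads:
        groups[(read["position"], read["orientation"])].append(read)
    for group in groups.values():
        best = max(group, key=lambda r: r["quality"])
        for read in group:
            read["duplicate"] = read is not best
    return reads

reads = [
    {"name": "a", "position": 100, "orientation": "+", "quality": 30},
    {"name": "b", "position": 100, "orientation": "+", "quality": 40},
    {"name": "c", "position": 200, "orientation": "-", "quality": 20},
]
mark_duplicates(reads)
# "b" has the highest quality in its group, so "a" is marked as a
# duplicate; "c" is alone in its group and is kept.
```

In ADAM itself, this grouping runs as a distributed operation over the full read dataset rather than as an in-memory loop, but the marking rule is the same.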
https://adam.readthedocs.io/en/latest/algorithms/reads/
2020-02-17T07:02:14
CC-MAIN-2020-10
1581875141749.3
[]
adam.readthedocs.io
Visual Studio Release Rhythm Microsoft will continue to enhance the capabilities of Visual Studio 2019 with frequent minor version updates. We want these features to add clear value to you. So, we ask for your product suggestions on Developer Community, we publish a Roadmap of upcoming features, and we make a preview of these features available for you to validate and submit feedback on. This doc will explain our release rhythm of previews, releases, and servicing fixes. Visual Studio 2019 can be acquired from one or both of the Preview Channel and the Release Channel. The Preview Channel carries the latest features and provides you with an early peek of what's coming in the next minor version update on the Release Channel. After a few iterations in preview, these features become available on the Release Channel. You can install both Preview and Release Channel versions of Visual Studio 2019 side-by-side on the same machine. At any given point, the Preview Channel will contain the same or newer features and fixes compared to the Release Channel. You will get notified as newer versions are available for install. In the following sections, you can find more details about how and when updates to Visual Studio ship, and how you can preview functionality before it's released to the world. Release Channel Updates Updates to Visual Studio fall into two general categories: - Minor Updates ship roughly every two to three months to the Release Channel, after being available in the Preview Channel. These updates may include new features, bug fixes, and changes to adapt to platform changes (for example, changes in Windows, Azure, Android, or iOS). You can determine which minor update you are running by opening Help > About and reading the second part of the version number, for example 16.1 or 16.2. - Servicing Updates are releases of targeted fixes for critical issues. 
You can determine which servicing update you're running by opening Help > About and reading the third part of the version number, for example 16.0.10 or 16.1.3. Both minor and servicing updates to the Release Channel are ready to be used in production environments. Noteworthy updates will be announced through the Visual Studio blog. All minor updates are accompanied by release notes. Visual Studio alerts you that a new update is available by the notification icon in the bottom right corner of the IDE. We strongly encourage everyone to adopt updates as soon as possible. However, we do acknowledge that some customers may need to use an older build. For users of the Professional and Enterprise editions, we offer multiple servicing baselines to give administrators and larger development teams more flexibility and control in adopting new releases. Please note that outside of servicing baselines, we do not offer support for these older releases. If an older release is required for your development scenario, we recommend that you create your own offline install cache and store the cache for future use. Preview Channel Updates Previews are meant for those who are eager to try out new Visual Studio features. All features or experiences that are coming online in the next minor update always ship first in a release to the Preview Channel. Even though previews are not intended for use in production, they will be at a sufficient quality level for you to generally use and provide feedback. There are usually multiple previews leading up to the next minor update, and they don't necessarily adhere to any preset schedule. Update Notifications You will receive notification that updates to the Preview and Release channels are available through the notification icon in the IDE and through posts to the Visual Studio blog. 
The Release Channel release notes and Preview Channel release notes will document the features and fixes available in that release to help you make an informed decision about when to install it. Lastly, we will update all the relevant feedback items on the Developer Community portal to let you know in which version an issue was fixed. Feedback We would love to hear from you! For issues, let us know through the Report a Problem feature in the upper right-hand corner of either the installer or the Visual Studio IDE. You can make a product suggestion or track your issues by using Visual Studio Developer Community, where you can ask questions, find answers, or propose new features. You can also get free installation help through our Live Chat support.
https://docs.microsoft.com/en-us/visualstudio/productinfo/release-rhythm
2020-02-17T06:27:58
CC-MAIN-2020-10
1581875141749.3
[array(['media/vs_branching_diagram.png', 'Visual Studio Releases and Previews'], dtype=object)]
docs.microsoft.com
Our general privacy policy and app-specific privacy policy, which you can find below, apply to the File Viewer for Bitbucket Cloud app. Please review both prior to using the app. This page contains the Privacy Policy of File Viewer for Bitbucket Cloud. This Privacy Policy does not apply to File Viewer for Bitbucket Server. File Viewer for Bitbucket Server is hosted on your systems and does not collect any data. Information Gathering and Usage During File Viewer installation, users grant the add-on access to read repositories. The access is needed to read the content of a file in order to display it on the Bitbucket source view page. No cloning operations are executed. File Viewer uses Google Analytics, which allows us to better understand our users and improve the services we offer. To learn about how Google Analytics collects and processes data, please read here. For information about what steps Google Analytics takes to help keep your data protected, please read here. The information gathered is non-personally identifiable. StiltSoft will not facilitate the merging of personally-identifiable data with non-personally identifiable data previously collected without notice and the user's consent. Data Storage File Viewer uses a third-party vendor and hosting partner, Heroku, to provide the necessary hardware, software, networking, storage, and related technology required to run File Viewer. Although StiltSoft owns the code, databases, and all rights to the File Viewer platform, you retain all rights to your data. Disclosure The information we collect is used to improve the quality of our service, and is not shared with or sold to other organizations for commercial purposes, except to provide products or services you've requested, when we have your permission, or if StiltSoft is acquired by or merged with another company. In this event, StiltSoft will notify you before information about you is transferred and becomes subject to a different privacy policy. 
Changes

StiltSoft may periodically update this policy.

Questions

Any questions about this Privacy Policy should be addressed to [email protected].
https://docs.stiltsoft.com/display/public/BBFileViewer/Privacy%20Policy
2020-02-17T08:11:09
CC-MAIN-2020-10
1581875141749.3
[]
docs.stiltsoft.com
Why does my "Settings" area disappear during a demo of Microsoft CRM 3? I have only ever seen this be an issue in one-machine Virtual PC demo environments. You are in Outlook and you want to show customizations, so you open the Web Client: there is no Customization area, and sometimes no Settings area at all. Why, you ask? This is related to a caching issue with the sitemap.xml file, which controls the entire left-hand navigation and is generally common between the web client and the Outlook clients. The one exception is Settings, which is only available in the web client. If you start on the server with Outlook first and then open the web client, you may run into this problem. If you export the Site Map customizations and look at the file, you will see that the Settings area is set to Client="Web". This means that the area will only appear in the CRM Web client. This is what is getting cached. When you open up the CRM Outlook Client, the Sitemap settings say not to display the Settings button, since it is only available in the CRM Web Client. (And please do not try to change that; I am not sure what the ramifications of doing so would be…) You can fix this in a demo environment pretty easily. Thanks to Dana Martens for providing an easy fix for something that has been driving me nuts since the early days of the 3.0 VPCs… This is something that Anne Stanton and others have been talking about for a while. While this is not for use in production, it should work. (I have ONLY ever run into this in my VPC environments.)
https://docs.microsoft.com/en-us/archive/blogs/midatlanticcrm/why-does-my-settings-area-disappear-during-a-demo-of-microsoft-crm-3
2020-02-17T08:18:27
CC-MAIN-2020-10
1581875141749.3
[array(['http://www.microsoftcrmdemo.com/blog/Settings/ServerURL.jpg', None], dtype=object) array(['http://www.microsoftcrmdemo.com/blog/Settings/BothOpen.jpg', None], dtype=object) ]
docs.microsoft.com
Probably the first thing you'd like to change in the Rover is the Wifi network name and password. It's easy - let's do it.

IP: 10.0.0.1 | Login: pi | Password: raspberry

You need to edit the .conf file that holds the access point settings. Type:

sudo nano /etc/hostapd/hostapd.conf

Here you can modify two lines: ssid and wpa_passphrase. It's better not to touch the rest. By default it will be:

ssid=TurtleRover-[serial-number] or ssid=TurtleRover-XXYYY

And wpa_passphrase:

wpa_passphrase=password

To save the changes, press Ctrl+O and then ENTER (or RETURN). After you reboot your Rover, the Wifi network will be named and secured according to your modifications.
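The same edit that nano performs can be sketched programmatically. This is only an illustration (not part of the official manual): a small Python helper that rewrites the ssid and wpa_passphrase lines in hostapd.conf-style text. The key names match the manual above; the sample values are hypothetical.

```python
def update_hostapd_conf(text, new_ssid=None, new_passphrase=None):
    """Return hostapd.conf text with the ssid/wpa_passphrase lines replaced."""
    out = []
    for line in text.splitlines():
        if new_ssid is not None and line.startswith("ssid="):
            line = "ssid=" + new_ssid
        elif new_passphrase is not None and line.startswith("wpa_passphrase="):
            line = "wpa_passphrase=" + new_passphrase
        out.append(line)
    return "\n".join(out)

conf = "interface=wlan0\nssid=TurtleRover-XXYYY\nwpa_passphrase=password"
print(update_hostapd_conf(conf, "MyRover", "s3cret"))
```

After writing the updated file back, the changes take effect on the next reboot, just as with the manual edit.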
https://docs.turtlerover.com/manuals/wifi-name-and-password-change
2020-02-17T06:41:21
CC-MAIN-2020-10
1581875141749.3
[]
docs.turtlerover.com
Installation

If you do not already have Anaconda installed, please download it via the downloads page and install it. IOPro is included with Anaconda Workgroup and Anaconda Enterprise subscriptions. To start a 30-day free trial, just download and install the IOPro package.

If you already have Anaconda (free Python distribution) installed:

conda update conda
conda install iopro

If you do not have Anaconda installed, you can download it here. IOPro can also be installed into your own (non-Anaconda) Python environment. For more information about IOPro, please contact [email protected].

IOPro Update Instructions

If you have Anaconda (free Python distribution) installed, first update the conda package management tool to the latest version, then use conda to update the IOPro product installation:

conda update conda
conda update iopro
https://docs.anaconda.com/iopro/1.8.0/install/
2020-02-17T08:04:23
CC-MAIN-2020-10
1581875141749.3
[]
docs.anaconda.com
Kiten supports more advanced searches than plain word searches. Besides Exact Match, Kiten has three additional search modes. To search for the beginning of a word, instead of pressing the Search button on the toolbar or pressing Return in the text entry on the toolbar, choose the corresponding menu entry. Similarly, for ending or anywhere searches, choose the corresponding menu entry to search for your text anywhere in, or at the end of, a word. These search modes work for searches in both languages. Kiten also supports word type searches, such as: verb, noun, adjective, adverb, prefix, suffix, expression, or any type. This way you can filter your results more conveniently.
https://docs.kde.org/stable5/en/kdeedu/kiten/advanced-searches.html
2020-02-17T06:30:53
CC-MAIN-2020-10
1581875141749.3
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object) array(['ending_search.png', 'Kiten match ending search'], dtype=object) array(['word_type_results.png', 'Kiten match word type'], dtype=object)]
docs.kde.org
Last updated: 2020-02-17 14:56:34

This operation uploads a part in a multipart upload task. When the total size of all parts is greater than 5 MB, every part except the last (which has no size limit) must be larger than 5 MB. When the total size of all parts is less than 5 MB, every part except the last (which has no size limit) must be larger than 100 KB. If these requirements are not met, a 413 status code is returned.

To ensure that the data is not damaged during transmission, use the Content-MD5 header. When this header is used, KS3 automatically calculates the MD5 and verifies it against the MD5 provided by the user; if they do not match, an error message is returned.

Attention: once you have initiated a multipart upload, you must complete or abort the upload task to stop the charges caused by storage.

PUT /{ObjectKey}?partNumber={PartNumber}&uploadId={UploadId} HTTP/1.1
Host: {BucketName}.{endpoint}
Date: {date}
Content-Length: {Size}
Authorization: {SignatureValue}

Attention:
- Correspondence between Endpoint and Region
- SignatureValue Algorithm

The request uses no additional request parameters. The interface can use all common request headers. For more information, please see Common Request Headers. The request has no request body. This interface can use all common response headers. For more information, please see Common Response Headers. The response has no response body.

Sample Request

PUT /my-video.rm?partNumber=2&uploadId=1aa9cfad5e2e405c8f27965feb8b60cc HTTP/1.1
Host: ks3-example.ks3-cn-beijing.ksyun.com
Date: Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 10485760
Content-MD5: pUNXr/BjKK5G2UKvaRRrOA==
Authorization: authorization string

part data omitted

Sample Response

HTTP/1.1 200 OK
Date: Mon, 1 Nov 2010 20:34:56 GMT
ETag: "b54357faf0632cce46e942fa68356b38"
Content-Length: 0
Connection: keep-alive
Server: Tengine
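The Content-MD5 header mentioned above is, by convention for S3-compatible APIs, the base64-encoded binary MD5 digest of the part body. A minimal sketch of how a client could compute it (consult the KS3 SDK for the authoritative implementation):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Return the Content-MD5 header value for a request body:
    base64 of the raw (binary) MD5 digest, not the hex string."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

# Example: value to send alongside a part upload.
print(content_md5(b"part data omitted"))
```

If the digest the server computes over the received bytes does not match this header, the upload of that part is rejected, which is what protects against in-transit corruption.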
https://docs.ksyun.com/documents/27929?preview=1
2020-02-17T06:56:37
CC-MAIN-2020-10
1581875141749.3
[]
docs.ksyun.com
Implement Design Patterns in Orchestrations This section discusses the common patterns of BizTalk Server programming as well as enterprise integration patterns. You can leverage a single pattern or combine multiple patterns to design your business process and then implement the design by using shapes in BizTalk Orchestration Designer. Design Patterns. See Also Designing Orchestration Flow
https://docs.microsoft.com/en-us/biztalk/core/implementing-design-patterns-in-orchestrations?redirectedfrom=MSDN
2020-02-17T08:03:55
CC-MAIN-2020-10
1581875141749.3
[]
docs.microsoft.com
Instructions for upgrading the Onegini Android SDK to version 9.5

Third-party libraries were updated

A couple of third-party libraries were updated; if you provide the SDK as an aar archive, please update the dependencies in your project:

- OkHttp library was updated to version 3.12.1:

com.squareup.okhttp3:okhttp:3.12.1
com.squareup.okhttp3:okhttp-urlconnection:3.12.1
com.squareup.okhttp3:logging-interceptor:3.12.1

- Retrofit library was updated to version 2.5.0:

com.squareup.retrofit2:retrofit:2.5.0
com.squareup.retrofit2:converter-gson:2.5.0
com.squareup.retrofit2:adapter-rxjava2:2.5.0

- RxJava library was updated to the latest version:

io.reactivex.rxjava2:rxandroid:2.0.2
io.reactivex.rxjava2:rxjava:2.1.12

- Firebase Cloud Messaging library was updated to version 17.6.0:

com.google.firebase:firebase-messaging:17.6.0
https://docs.onegini.com/msp/stable/android-sdk/upgrade-instructions/9.5.html
2020-02-17T07:42:39
CC-MAIN-2020-10
1581875141749.3
[]
docs.onegini.com
Make a single press button. The user clicks it and something happens immediately.

// Draws 2 buttons, one with an image and the other with text,
// and prints a message when they are clicked.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Texture btnTexture;

    void OnGUI()
    {
        if (!btnTexture)
        {
            Debug.LogError("Please assign a texture on the inspector");
            return;
        }

        if (GUI.Button(new Rect(10, 10, 50, 50), btnTexture))
            Debug.Log("Clicked the button with an image");

        if (GUI.Button(new Rect(10, 70, 50, 30), "Click"))
            Debug.Log("Clicked the button with text");
    }
}
https://docs.unity3d.com/ScriptReference/GUI.Button.html
2020-02-17T06:10:14
CC-MAIN-2020-10
1581875141749.3
[]
docs.unity3d.com
Rubicon Java

Rubicon Java is a bridge between the Java Runtime Environment and Python. It enables you to:

- Use Python to instantiate objects defined in Java,
- Use Python to invoke methods on objects defined in Java, and
- Subclass and extend Java classes in Python.

It also includes wrappers of some key data types from the Java standard library (e.g., java.lang.String).

Table of contents

How-to guides

Guides and recipes for common problems and tasks, including how to contribute

Background

Explanation and discussion of key topics and concepts

Community

Rubicon is part of the BeeWare suite. You can talk to the community through:
https://rubicon-java.readthedocs.io/en/latest/
2020-02-17T06:46:46
CC-MAIN-2020-10
1581875141749.3
[]
rubicon-java.readthedocs.io
Overview of Application Migration You can use Application Migration to migrate applications, such as Oracle Java Cloud Service, SOA Cloud Service, and Integration Classic applications, from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure. Application Migration simplifies the migration of applications from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure. An application is a combination of deployable artifacts and the applied configuration, which can be exported from a service instance running in a source environment and imported into a compatible service instance running on Oracle Cloud Infrastructure. Application Migration performs the following actions: - Connects to a specified Oracle Cloud Infrastructure Classic source environment and authenticates with that source environment using the specified credentials. - Displays a list of applications that can be migrated from the source environment. You can select the application that you want to migrate. - Discovers the artifacts and configuration of the application selected for migration. You can configure this application and provide details required for the application to be set up in the target environment. When the configuration is complete, you can start the migration. - Launches a service instance on Oracle Cloud Infrastructure to host the migrated application. - Deploys the exported artifacts and specified configuration on the new instance in Oracle Cloud Infrastructure. When you're satisfied that your application has been successfully migrated and works as expected, you can delete the corresponding application and associated resources in the source environment. Remember that if the migration results in any application-specific changes such as changes in IP addresses or DNS names, then you might need to update objects that reference those resources. 
Workflow If you're migrating your applications to Oracle Cloud Infrastructure using Application Migration, follow these tasks as a guide. Prerequisites Before you migrate your applications to Oracle Cloud Infrastructure, complete the following tasks. Ensure that you have access to Application Migration. To enable access to Application Migration in your Oracle Cloud Infrastructure tenancy, contact your Oracle Cloud customer service representative. - You must have an Oracle Cloud Infrastructure Classic account with access to service administrator credentials for the applications that you want to migrate. For example, you must specify a user who has the JAAS JavaAdministrator role to migrate Oracle Java Cloud Service application. - Set up an Oracle Cloud Infrastructure tenancy and ensure that the required networking configuration is complete. - Identify or create a compartment in Oracle Cloud Infrastructure to which you want to migrate the application. - Set up the required policies in Oracle Cloud Infrastructure. See Manage Service Access and Security. - Applications that you want to migrate should be in the running state in the source environment. Also complete the following tasks based on the application that you are migrating: - For Oracle Platform Services services, see Prerequisites for Oracle Platform Services on Oracle Cloud Infrastructure. - For Oracle Java Cloud Service applications, complete the following tasks: - Identify whether your application data is stored in an on-premises database or in database instances that were created in Oracle Cloud Infrastructure Classic. - If application data is stored in database instances that were created in Oracle Cloud Infrastructure Classic, you must migrate the application database to Oracle Cloud Infrastructure before migrating the application. See Migrate the Application Databases. 
- Set up the required security rules in Oracle Cloud Infrastructure Classic to allow the target's subnet to communicate with the database ports. If the application data is stored in an on-premises database, create a security rule to allow connections from the public IP range of the target's subnet. If you have migrated the database to Oracle Cloud Infrastructure, set up security rules to allow the target's subnet to access the application database. - Gather information about the databases that you have migrated to Oracle Cloud Infrastructure. You'll need to provide this information while configuring the target environment. See Get Information About the Target Databases. - For Oracle Process Cloud Service and Oracle Integration Cloud Service applications, see: - For Oracle Analytics Cloud - Classic, you must migrate your users and groups. Review all data connections used by the application. Ensure that they are accessible from Oracle Cloud Infrastructure. Depending on where the data is stored, you might have to migrate the database to Oracle Cloud Infrastructure before you migrate the application. - Migrate users and groups. See Migrate Users and Roles from Oracle Analytics Cloud - Classic. - If you're currently using Oracle Analytics Cloud - Classic to analyze data in an Oracle Database on Oracle Cloud Infrastructure Classic, you must create a database instance on Oracle Cloud Infrastructure and migrate your data. See Select a Method to Migrate Database Instances. - For Oracle Integration, grant service administrator permissions for each service instance. For information about the permissions required for Oracle Integration, see Grant Access and Manage Security. Next: After completing the prerequisites, provide information about the source environment to create an Application Migration source. See To create a source.
Supported Applications Use Application Migration to migrate the following applications to Oracle Cloud Infrastructure: - Oracle Java Cloud Service - Oracle SOA Cloud Service - Oracle Analytics Cloud - Classic - Oracle Integration - Oracle Process Cloud Service - Oracle Integration Cloud Service Supported Regions Application Migration is available in US East (Ashburn). Service Limits Application Migration has various default limits. When you create a source or migration in Application Migration, the system ensures that your request is within the bounds of your limit. The limit that applies to you depends on your subscription. The following limits apply to Application Migration resources, such as source and migration. These limits apply to resources in each region. You can submit a request to increase the service limits for Application Migration resources. See Requesting a Service Limit Increase. In addition, applications that you migrate use resources that the Application Migration service creates, such as database instances, Oracle Java Cloud Service instances, SOA Cloud Service instances, Compute instances, and networking resources. These resources are also subject to their respective service limits. For more information about the service limits that apply to other resources, see Service Limits. Service Events Actions that you perform on sources and migrations in Application Migration emit events. You can define rules that trigger a specific action when an event occurs. For example, you might define a rule that sends a notification to administrators when someone migrates an application. See Overview of Events and Get Started with Events. Source Event Types This table lists the source events that you can reference. Example This example shows information associated with the event Source - Create Begin. 
{
  "eventType": "com.oraclecloud.applicationmigration.createsource-source",
  "resourceId": "ocid1.amssource.oc1..<unique_ID>",
  "availabilityDomain": "<availability_domain>",
  "freeFormTags": {
    "Department": "Finance"
  },
  "definedTags": {
    "Operations": {
      "CostCenter": "42"
    }
  },
  "additionalDetails": {}
}

Migration Event Types

This table lists the migration events that you can reference.

Example

This example shows information associated with the event Migration - Create Begin:

{
  "eventType": "com.oraclecloud.applicationmigration.createmigration-migration",
  "resourceId": "ocid1.amsmigration.oc1..<unique_ID>",
  "availabilityDomain": "<availability_domain>",
  "freeFormTags": {
    "Department": "Finance"
  },
  "definedTags": {
    "Operations": {
      "CostCenter": "42"
    }
  },
  "additionalDetails": {}
}

When you sign in, you are prompted to enter your cloud tenant, your username, and your password. Application Migration also supports private access from Oracle Cloud Infrastructure resources in a VCN through a service gateway. A service gateway allows connectivity to the Application Migration public endpoints from private IP addresses in private subnets. Set up a service gateway if you are using the REST API or CLI to access Oracle Cloud Infrastructure. If you are using the console to access Oracle Cloud Infrastructure, you need not set up a service gateway. You can optionally use IAM policies to control which VCNs or ranges of IP addresses can access Application Migration. See Access to Oracle Services: Service Gateway for details. An administrator needs to set up users, create and manage the cloud network, and launch instances.
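Event payloads like the JSON samples above can be consumed programmatically, for example by a rule action that reacts to specific event types. This is an illustrative sketch only: the field names (eventType, resourceId) come from the samples, while the filtering helper and the shortened OCIDs are hypothetical.

```python
import json

SOURCE_CREATE = "com.oraclecloud.applicationmigration.createsource-source"

def matching_resources(events, event_type):
    """Return the resourceId of every event whose eventType matches."""
    return [e["resourceId"] for e in events if e.get("eventType") == event_type]

payload = json.loads("""
[{"eventType": "com.oraclecloud.applicationmigration.createsource-source",
  "resourceId": "ocid1.amssource.oc1..example"},
 {"eventType": "com.oraclecloud.applicationmigration.createmigration-migration",
  "resourceId": "ocid1.amsmigration.oc1..example"}]
""")
print(matching_resources(payload, SOURCE_CREATE))
# → ['ocid1.amssource.oc1..example']
```

A real Events rule would perform the same kind of match on eventType and then trigger a notification or function rather than just collecting IDs.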
https://docs.cloud.oracle.com/en-us/iaas/application-migration/appmigrationoverview.htm
2020-02-17T06:55:23
CC-MAIN-2020-10
1581875141749.3
[]
docs.cloud.oracle.com
Cold Standby Configuration and Switchover

Performance Management Advisors support cold standby High Availability starting in release 8.5.0. Cold standby means you have redundant servers available for each of the nodes in the Platform database's Cluster_Member table for which you require backup, and also for any data adapters (Advisors Genesys Adapter or Advisors Cisco Adapter) configured in the system. When an Advisors component or its host server fails, you switch over to the backup system. You can install the backup system before the primary goes down, or after the primary fails. In either case, after the backup system is installed, you need only make small manual adjustments in the Platform database to replace the primary server with the backup server, and back again. Note that starting in release 8.5.1, Advisors support warm standby HA for certain modules, integrating with Solution Control Server. See Integration with Solution Control Server and Warm Standby.

Install Redundant Servers

Switchover on a Cluster Node Server

If the primary server, or the platform service on a primary server, goes down, use the following procedure to switch over to the redundant system. This procedure assumes the redundant server is installed. See the procedure on the Install Redundant Servers tab if you have not already installed the backup system.

Switchover on an Adapter

If a data adapter (Advisors Genesys Adapter or Advisors Cisco Adapter) or its host server fails, use the following procedure to switch over to the redundant adapter/server. To switch over from a backup adapter to the primary adapter again, you use the same procedure, but there is no need to update the inf_genesys_adapter.properties file on the primary server. That server's properties file was not changed during the switch over to the backup adapter; it therefore contains the correct information.
HA and Apache Server

If you move any Advisors node to a backup server, you must update the ProxyPass section of the Apache server configuration file (httpd.conf). It is important that you find every instance of the IP address or host name of the system that is being replaced, and change those instances to the IP address or host name of the system that you have configured as the backup. After you complete and save updates to the Apache Server configuration file, stop and then restart the Apache service.

HA and RMC

The Supervisor Desktop Service (SDS) server that supports the Resource Management Console (RMC) has no inherent High Availability (HA) capability. Loss of the SDS server requires recovery of the service or machine, or a redundant SDS installation with the same configuration as the existing SDS installation (that is, it must point to the same Configuration Server, Stat Server(s), and TServer(s)), and with the same permission structure. If you transfer from one SDS server to another, you must update the RMCInfo.xml file in the RMC installation to point to the new SDS instance. Instructions are available in the Deploying SDS and RMC section of the Performance Management Advisors Deployment Guide. If your Advisors deployment uses RMC, Genesys strongly advises you to install the CCAdv Web services component into the Advisors Platform instance where the administration workbench is installed, because RMC uses objects in both the workbench and in the Web services. RMC cannot connect to both sets of objects if the workbench and Web services are on different servers. If you install RMC with both the administration workbench and CCAdv Web Services, RMC is supported for HA along with the entire node.
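The httpd.conf update described above (replace every instance of the failed host with the backup host) can be scripted. A hedged sketch; the host names and configuration text are illustrative, and you should always review the result before restarting the Apache service:

```python
def switch_host(conf_text: str, old_host: str, new_host: str) -> str:
    """Return httpd.conf text with every occurrence of old_host replaced."""
    return conf_text.replace(old_host, new_host)

conf = ("ProxyPass /adv/ http://primary.example.com:8080/adv/\n"
        "ProxyPassReverse /adv/ http://primary.example.com:8080/adv/")
print(switch_host(conf, "primary.example.com", "backup.example.com"))
```

A plain string replacement like this also catches occurrences outside the ProxyPass section, which matches the instruction to change every instance of the old IP address or host name.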
https://docs.genesys.com/Documentation/PMA/8.5.1/PMADep/PMAHA
2020-02-17T08:08:18
CC-MAIN-2020-10
1581875141749.3
[]
docs.genesys.com
Last updated: 2020-02-17 15:50:11

KS3 supports processing pictures through native styles, specifically by connecting the style strings via @base@; at the same time, it supports access via style names: <original image URL>@style@<style name>. There is a sample picture in our test bucket, which can be accessed through the link below. You can add a watermark to the picture. On the basis of the above picture, we add a watermark with the text "Kingsoft Cloud". The result can be obtained in two methods: Method 1: Method 2: For a thumbnail with a length of 100 and a width of 100, with center cropping, two methods can be used to obtain the result: Method 1: Method 2:
https://docs.ksyun.com/documents/28035?preview=1
2020-02-17T07:50:24
CC-MAIN-2020-10
1581875141749.3
[]
docs.ksyun.com
Configuring Kerberos authentication on the NetScaler appliance

This topic provides the detailed steps to configure Kerberos authentication on the NetScaler appliance by using the CLI and the GUI.

Configuring Kerberos authentication on the CLI

Enable the AAA feature to ensure the authentication of traffic on the appliance.

ns-cli-prompt> enable ns feature AAA

Add the keytab file to the NetScaler appliance. A keytab file is necessary for decrypting the secret received from the client during Kerberos authentication. A single keytab file contains authentication details for all the services that are bound to the traffic management virtual server on the NetScaler appliance.

First generate the keytab file on the Active Directory server and then transfer it to the NetScaler appliance.

Log on to the Active Directory server and add a user for Kerberos authentication. For example, to add a user named "Kerb-SVC-Account":

net user Kerb-SVC-Account freebsd!@#456 /add

Note: In the User Properties section, ensure that the "Change password at next logon" option is not selected and the "Password does not expire" option is selected.

Map the HTTP service to the above user and export the keytab file. For example, run the following command on the Active Directory server:

ktpass /out keytabfile /princ HTTP/[email protected] /pass freebsd!@#456 /mapuser newacp\dummy /ptype KRB5_NT_PRINCIPAL

Note: You can map more than one service if authentication is required for more than one service. If you want to map more services, repeat the above command for every service. You can give the same name or different names for the output file.

Transfer the keytab file to the NetScaler appliance by using the Unix ftp command or any other file transfer utility of your choice.

Log on to the NetScaler appliance, and run the ktutil utility to verify the keytab file. The keytab file has an entry for the HTTP service after it is imported.
The ktutil interactions are as follows:

root@ns# ktutil
ktutil: rkt /var/keytabfile
ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------------------------------
ktutil: wkt /etc/krb5.keytab
ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------------------------------
   1    2 HTTP/[email protected]
ktutil: quit

The NetScaler appliance must obtain the IP address of the domain controller from the fully qualified domain name (FQDN). Therefore, Citrix recommends configuring the NetScaler appliance with a DNS server.

ns-cli-prompt> add dns nameserver <ip-address>

Note: Alternatively, you can add static host entries or use any other means so that the NetScaler appliance can resolve the FQDN name of the domain controller to an IP address.

Configure the authentication action and then associate it to an authentication policy.

Configure the negotiate action:

ns-cli-prompt> add authentication negotiateAction <name> -domain <domainName> -domainUser <domainUsername> -domainUserPasswd <domainUserPassword>

Configure the negotiate policy and associate the negotiate action to this policy:

ns-cli-prompt> add authentication negotiatePolicy <name> <rule> <reqAction>

Create an authentication virtual server and associate the negotiate policy with it.

Create an authentication virtual server:

ns-cli-prompt> add authentication vserver <name> SSL <ipAuthVserver> 443 -authenticationDomain <domainName>

Bind the negotiate policy to the authentication virtual server:

ns-cli-prompt> bind authentication vserver <name> -policy <negotiatePolicyName>

Associate the authentication virtual server with the traffic management (load balancing or content switching) virtual server:

ns-cli-prompt> set lb vserver <name> -authn401 ON -authnVsName <string>

Note: Similar configurations can also be done on the content switching virtual server.

Verify the configurations by doing the following:

Configuring Kerberos authentication on the GUI

Enable the AAA feature.
Navigate to System > Settings, click Configure Basic Features and enable the AAA feature. Add the keytab file as detailed in step 2 of the CLI procedure mentioned above. Add a DNS server. Navigate to Traffic Management > DNS > Name Servers, and specify the IP address for the DNS server. Configure the Negotiate action and policy. Navigate to Security > AAA - Application Traffic > Policies > Authentication > Advanced Policies > Policy, and create a policy with Negotiate as the action type. Bind the negotiate policy to the authentication virtual server. Navigate to Security > AAA - Application Traffic > Virtual Servers, and associate the Negotiate policy with the authentication virtual server. Associate the authentication virtual server with the traffic management (load balancing or content switching) virtual server. Navigate to Traffic Management > Load Balancing > Virtual Servers, and specify the relevant authentication settings. Note Similar configurations can also be done on the content switching virtual server. Verify the configurations as detailed in step 7 of the CLI procedure mentioned above.
https://docs.citrix.com/en-us/netscaler/11-1/aaa-tm/ns-aaa-config-protocols-con/ns-aaa-config-protocols-krb5-ntlm-intro-con/kerberos-config-on-netscaler.html
2019-11-12T09:55:32
CC-MAIN-2019-47
1573496664808.68
[]
docs.citrix.com
Building ARM64 Win32 C++ Apps
Learn how to install the ARM64 tools for Visual Studio. Then we'll walk you through the steps of creating and compiling a new ARM64 project.

Build 2018: Windows 10 on ARM for developers
Learn about Windows 10 on ARM devices, how the magic of x86 emulation works, and finally how to submit and build apps for Windows 10 on ARM. We will show how to build ARM64 apps for desktop and UWP.

Windows Community Standup with Kevin Gallo
Get a deep understanding of how Windows 10 runs on ARM64, and get a feel for apps and experiences on this platform.

Understanding Windows 10 on ARM
Get to know the platform by looking at these resources.

Developing for Windows 10 on ARM
Start tailoring your apps to Windows 10 on ARM and take advantage of the features available there.

Let us know if you have feedback
We are continuously improving our product by leveraging feedback from you and our existing customers. If you have an idea, are stuck on a problem, or just want to share how great your experience is, these links will help you.

Use the feedback hub: Did we miss something? Do you have a great idea? Let us know in the Feedback Hub.
Report a bug: Found a bug in our platform? Email us with the details.
Give doc feedback: Have you found an issue with our docs? Do you want us to make something clearer? Create an issue on our docs GitHub repo.
https://docs.microsoft.com/en-us/windows/arm/
2019-11-12T09:26:15
CC-MAIN-2019-47
1573496664808.68
[]
docs.microsoft.com
Crate contralog Composable logging with monoids and contravariant functors. A logger is a routine that takes input and has side-effects. Any routine that has the appropriate type will do. A logger can be seen as the opposite or dual of an infinite iterator. The core trait of this crate is Logger. It has only a single method that must be implemented: log. To log something, pass it to this method. It is up to the logger to decide what to do with the value. Loggers are composable: given two loggers with compatible types, a new logger can be created that forwards its input to both loggers. Loggers can also be transformed using methods such as map and filter.
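The crate itself is Rust, but the composition idea is language-independent and can be sketched in Python for illustration: a logger consumes values, a contravariant map transforms the input before logging, and a filter drops inputs. The class and method names below are illustrative, not the crate's actual API.

```python
# Conceptual sketch of the contralog idea: loggers as consumers of values
# that compose contravariantly (the transformation applies to the *input*).

class ListLogger:
    """A logger that records everything it receives in a list."""
    def __init__(self):
        self.items = []
    def log(self, value):
        self.items.append(value)

class Contramap:
    """Adapt a logger of B into a logger of A via a function f: A -> B."""
    def __init__(self, inner, f):
        self.inner, self.f = inner, f
    def log(self, value):
        self.inner.log(self.f(value))

class Filter:
    """Forward only the inputs that satisfy a predicate."""
    def __init__(self, inner, pred):
        self.inner, self.pred = inner, pred
    def log(self, value):
        if self.pred(value):
            self.inner.log(value)

sink = ListLogger()
lengths = Contramap(Filter(sink, lambda n: n > 2), len)
for word in ["hi", "hello", "hey"]:
    lengths.log(word)
print(sink.items)
# → [5, 3]
```

Note the direction of the transformation: mapping over a logger changes what it accepts, not what it produces, which is exactly why loggers are contravariant functors while iterators are covariant.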
https://docs.rs/contralog/0.0.1/contralog/
2019-11-12T09:10:42
CC-MAIN-2019-47
1573496664808.68
[]
docs.rs
Teamstudio Adviser 6.2 - Release Notes

The current version, Adviser 6.2.0, is a feature release of Teamstudio Adviser. Please read the installation guide (available here) carefully before installing.

Upgrading from a previous version of Adviser 6

Upgrading to Adviser 6.2 involves refreshing or replacing the design of the server database, updating the workstation to the new command-line module, and restarting the server's HTTP task. See Upgrading Adviser on the installation page for a detailed list of steps.

Adviser and Usage Auditor 5.x

Upgrading from Usage Auditor 5.x.

Running Adviser and Usage Auditor 5.x in Parallel

If you are evaluating Adviser, you may want to run Adviser and Usage Auditor in parallel. This is supported, but you will need to run them on separate workstations to avoid conflicts between the two products.

Known Issues

- Allowing the browser to "save password" may cause errors in the browser UI on some browsers/platforms. See Browsers and Saved Passwords on the installation page for details.
- Adviser requires limited anonymous http access to the server from the workstation. If you are unable to enable anonymous access, please contact Teamstudio technical support.
Fix List Adviser 6.2.0 (Build 408) [TMS-971] - Fix invalid error message when license is expired [TMS-973] - Replace Adviser Workstation NSF with a separate executable Workstation application [TMS-974] - Add ability to disable Effective Access (pending performance improvements) Adviser 6.1.4 (Build 381) [TMS-964] - Inactivity timeout on server/servlet causes NPE in ModuleClassLoader [TMS-965] - Complexity can loop trying to reprocess an invalid document Adviser 6.1.3 (Build 380) [TMS-961] - Complexity fails with NPE if a design element cannot be accessed [TMS-963] - Include debug information in java classes Adviser 6.1.2 [TMS-930] - Display version number in the web UI [TMS-935] - Add columns to Users/All view in Usage showing Notes Client and Web usage totals for each user [TMS-937] - Make server logs available in the web UI [TMS-938] - Support filtering and deleting in server logs [TMS-939] - Prevent caching Web UI resources between releases [TMS-940] - Display version number to workstation logging [TMS-941] - Fix missing memory recycle during catalog import [TMS-942] - Add HTTP Authentication support to workstation (command line only) [TMS-943] - Usage scan task can error with server not responding [TMS-948] - Capture Adviser version & Domino/OS info in log documents [TMS-949] - Usage detail data counts can be incorrect for databases with web and client usage [TMS-950] - Group by filter display is incorrect [TMS-952] - Support for rebuilding Usage SQL database [TMS-953] - Missing translation for 'Unknown' guidance value Adviser 6.1.1 [TMS-917] Add “By Last Access” grouping to Usage Databases list [TMS-920] Max Enabled Servers Exceeded warning shows incorrectly for licenses with 1 server [TMS-922] Effective Access fails when recursive groups are present [TMS-923] Additional improvements to memory management [TMS-924] Catalog Scan fails on some multiple-server catalogs [TMS-926] ACL import should create server names in canonical form [TMS-928] REST services should 
compress data sent to browsers [TMS-929] Complexity enabled/disabled setting shows incorrect value Adviser 6.1.0 [TMS-807] Add https and port number options for Workstation invoke of Job service [TMS-821] Errors scheduling server-side jobs should be logged [TMS-826] Support for Domino 8.5.3 [TMS-827] View databases sorted/categorized by business value [TMS-835] Filter out named users in Usage [TMS-836] Incorrect dates are shown in Usage User details panels [TMS-837] Erroneous "item is no longer in this view" when using back button [TMS-864] Add Usage Users - By Last Access "Never" [TMS-865] Changing Business Value should update Guidance Details panel [TMS-879] Server scan should ignore and log address books that can't be opened [TMS-880] Improve memory handling for documents processed in enumerations [TMS-881] Improve ACL info in Catalog Dbs [TMS-884] Redesign settings -> default servers functionality to better handle many servers / slow loading data [TMS-890] Improve memory handling in lookup / find operations where possible [TMS-893] Show server when viewing DB details [TMS-894] Initial load of web app can cause ConcurrentModificationException on server console [TMS-895] Catalog overview uses incorrect icons for recent scans [TMS-897] Missing last access date for DBs in Usage User details [TMS-904] Complexity keyword (search) terms not always scored Adviser 6.0.2 [TMS-883] Fix issue where Settings page fails to function properly when a Domino Directory cannot be opened while searching for server names; Adviser 6.0.2 logs information about the error and continues loading servers from any additional directories. Adviser 6.0.1
http://docs.teamstudio.com/display/TEADOC062/Release+Notes
2019-11-12T07:49:20
CC-MAIN-2019-47
1573496664808.68
[]
docs.teamstudio.com
Items are identified in an ItemLookup request using two parameters: ItemId and IdType. The second parameter specifies the value type of the first parameter. IdType can be: UPC—(Universal Product Code) A 12-digit item identifier. The UPC is the identifier used in barcodes. (Not valid in the CA locale.) EAN—(European Article Number) A 13-digit equivalent of the UPC that is used in Europe for products and barcodes. JAN—(Japanese Article Number) The equivalent of the EAN that is used in Japan for products and barcodes. ISBN—(International Standard Book Number) An alphanumeric token that uniquely identifies a book. A book's EAN is typically set equal to the book's ISBN. SKU—(Stock Keeping Unit) A merchant-specific identifier for a purchasable good, like a shirt or chair. Amazon's version of the SKU is the ASIN. The following table shows the valid identifiers by locale. The default value of IdType is ASIN. For non-ASIN searches, including searches by ISBN, JAN, SKU, UPC, and EAN, a variety of additional parameters become mandatory, including a value for IdType.
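As a sketch of how the two parameters combine in a request, the hypothetical Python helper below assembles an ItemLookup query string. The function name is invented for illustration, and the SearchIndex example for non-ASIN lookups is an assumption here, not something the excerpt states — check the valid-identifiers table for your locale.

```python
from urllib.parse import urlencode

def item_lookup_params(item_id, id_type="ASIN", **extra):
    """Assemble the ItemLookup parameters described above.

    IdType defaults to ASIN; for any other IdType additional parameters
    become mandatory, so callers pass them via **extra.  Endpoint and
    credential handling are omitted -- this only builds the query string.
    """
    params = {"Operation": "ItemLookup", "ItemId": item_id}
    if id_type != "ASIN":
        # Non-ASIN searches must state the IdType explicitly.
        params["IdType"] = id_type
    params.update(extra)
    return urlencode(sorted(params.items()))

# Look up a book by its 13-digit EAN (SearchIndex is illustrative here).
query = item_lookup_params("9780131103627", id_type="EAN", SearchIndex="Books")
```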
http://docs.amazonwebservices.com/AWSECommerceService/2008-03-03/DG/ItemIIdentifiers.html
2008-05-11T22:38:12
crawl-001
crawl-001-002
[]
docs.amazonwebservices.com
By Jim Mackin | May 3, 2018 Like a lot of SuiteCRM, the field types are customisable and you can add your own types of fields. This post will explain how to add a colour picker as a custom field type. By pgorod | April 14, 2017 SuiteCRM uses a number of Scheduler jobs that are supposed to run at scheduled times, supporting functionalities like search indexing, workflows, email notifications, database maintenance, etc. By Jim Mackin | March 4, 2016 Sometimes you will want to format fields that are shown on the list view depending on the values. For example you may want to colour the direction field on calls differently or to highlight quotes that expire soon. By Jim Mackin | August 25, 2015 Alerts in the menu bar are an excellent way of calling users' attention. What if you want to create a custom alert through code? It's easy, here's how to do it.
https://docs.suitecrm.com/blog/
2020-07-02T09:11:39
CC-MAIN-2020-29
1593655878639.9
[]
docs.suitecrm.com
Operations supported on individual cluster nodes As a rule, Citrix ADC appliances that are a part of a cluster cannot be individually configured from their NSIP address. However, there are some operations that are an exception to this rule. These operations, when executed from the NSIP address, are not propagated to other cluster nodes. The operations are: - force cluster sync - sync cluster files - disable ntp sync - save ns config - reboot - shutdown For example, when you execute the command disable interface 1/1/1 from the NSIP address of a cluster node, the interface is disabled only on that node. Since the command is not propagated, the interface 1/1/1 remains enabled on all the other cluster nodes.
https://docs.citrix.com/en-us/citrix-adc/13/clustering/cluster-exceptional-commands.html
2020-07-02T10:34:01
CC-MAIN-2020-29
1593655878639.9
[]
docs.citrix.com
Adding UI for Silverlight to Visual Studio 2015 Toolbox The following tutorial will show you how to add UI for Silverlight controls to the Visual Studio 2015 toolbox. Adding UI for Silverlight to Visual Studio 2015 Toolbox To manually add Telerik UI for Silverlight to the Visual Studio 2015 Toolbox, follow the steps below: Open your application in Visual Studio 2015. Expand the Toolbox (View->Toolbox or use the shortcut Ctrl+Alt+X). Right-mouse click in the toolbox area and choose "Add Tab" from the context menu. Add a new tab with name "UI for Silverlight". Select the "UI for Silverlight" tab in the toolbox. Right-mouse click and select "Choose Items...". In the "Choose Toolbox Items" dialog, go to the "Silverlight Components" tab and click "Browse...". Navigate to the folder where the binaries are located. Select the DLL you want to import and click OK or press Enter. Press OK to include the controls in your toolbox, or filter the controls you want to add. Expand your toolbox. You will see the newly added controls in the "UI for Silverlight" section. After clicking the OK button of the "Choose Toolbox Items" dialog, the "UI for Silverlight" tab in the toolbox may be hidden. If that happens, move the mouse pointer over the Toolbox area, right-click, and then select the Show All command from the shortcut menu.
https://docs.telerik.com/devtools/silverlight/integration/installation-adding-to-vs-toolbox-silverlight
2020-07-02T09:26:54
CC-MAIN-2020-29
1593655878639.9
[]
docs.telerik.com
Introduction Welcome to the API documentation for Maps4News. Our API allows you to manage everything about your Maps4News account and your organisation, as well as generate maps. Authentication To authorize, use this code: import { ImplicitFlow, Maps4News } from '@mapcreator/maps4news'; const API_CLIENT_ID = 0; const API_HOST = ''; const REDIRECT_URL = ''; const auth = new ImplicitFlow(API_CLIENT_ID, REDIRECT_URL); const api = new Maps4News(auth, API_HOST); // Somewhere in your application api.authenticate(); // Get the user's information api.users.get('me').then(console.log); This example uses the guzzlehttp package from Composer. <?php $host = ""; $client_id = 0; $secret = "secret"; $redirect_uri = ""; //////////////////////////// // /login route in your app. // Prepare redirect to API login page. $query = http_build_query([ 'client_id' => $client_id, 'redirect_uri' => $redirect_uri, 'response_type' => 'code' ]); // Redirect user. header("Location: $host/oauth/authorize?$query"); ////////////////////////////// // /callback route in your app. $http = new GuzzleHttp\Client(); // Get the user's access_token. $response = $http->post("$host/oauth/token", [ 'form_params' => [ 'grant_type' => 'authorization_code', 'client_id' => $client_id, 'client_secret' => $secret, 'redirect_uri' => $redirect_uri, 'code' => $_POST['code'] ] ]); // Get the access token. $token = json_decode((string) $response->getBody(), true)['access_token']; // Request the user's info. $response = $http->get("$host/v1/users/me", [ 'headers' => [ 'Authorization' => "Bearer $token", 'Accept' => 'application/json' ] ]); // Display the user's information print_r(json_decode((string) $response->getBody())); Make sure the client_id, host and redirect_url are correctly filled in. The Maps4News API is an OAuth2 API. We support implicit and password flows. API To register an OAuth Client or Personal Access Token, please log into the API and register one via your account settings.
Have a look at our OpenAPI spec; it contains all the endpoints & info about how resources look and what each endpoint requires you to submit. To log in and try it out, hit the "Try out" button. Return Data For success responses { "success": true, "data": { ... } } For error responses { "success": false, "error": { "type": "HttpNotFoundException", "message": "Page Not Found" } } For error responses with validation errors { "success": false, "error": { "type": "ValidationException", "message": "Input data failed to pass validation", "validation_errors": { "attribute": [ "validation error for the attribute" ] } } } For error responses with JSON schema errors (currently only used when creating a Job Revision) { "success": false, "error": { "type": "ValidationException", "message": "Input data failed to pass validation", "validation_errors": { "attribute": [ "validation error for the attribute" ] }, "schema_errors": [ { "property": "data.meta", "pointer": "/data/meta", "message": "The property meta is required", "constraint": "required", "context": 1 } ] } } All JSON responses from the API are wrapped in a base object. Be sure to include an Accept: application/json header; otherwise, errors like 401, 403 & 404 will either return HTML or redirect you to the login page. Headers Exposed Headers Content-Type (application/json or the filetype, e.g. image/png) Content-Disposition (only for files, defaults to inline) For Pagination See pagination X-Paginate-Total X-Paginate-Pages X-Paginate-Offset For HTTP Caching Create Route /v1/users Update Route /v1/users/1 All returned model resources have an ETag and Last-Modified header. ETag headers are returned from Get, Create & Update requests. Because the ETags are weak they can also be used on other routes. For example, when getting a resource the API will return an ETag header; the value of that ETag header can be used on the update route to prevent the lost update problem.
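Since every JSON response is wrapped in the base object shown above, a small helper can unwrap it. This is a sketch, not part of the API or the official wrapper; the unwrap name and the thrown Error shape are our own.

```javascript
// Sketch: unwrap the base object that every JSON response is wrapped in.
// Only the envelope shapes come from the docs above.
function unwrap(body) {
  if (body.success) {
    return body.data;
  }
  // Turn the error envelope into a thrown Error.
  const err = new Error(`${body.error.type}: ${body.error.message}`);
  // Attach validation details when present (ValidationException responses).
  err.validationErrors = body.error.validation_errors || null;
  throw err;
}

// Success envelopes yield their payload:
// unwrap({ success: true, data: { id: 1 } })  ->  { id: 1 }
```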
Exposed CORS Headers Access-Control-Allow-Origin (default *) Access-Control-Allow-Methods Access-Control-Allow-Headers Access-Control-Expose-Headers Access-Control-Max-Age Accepted Headers Authorization Accept (should be set to application/json for all API requests) Content-Type X-No-CDN-Redirect (tells the API to not redirect the user to the CDN but instead fetch the item itself; default false) For Pagination See pagination X-Page X-Per-Page X-Offset For Midair Collision Prevention We follow the standard as described on the Mozilla Developer Network. If you submit any of these headers the API will assume you only want to update a resource when the header condition is met; omit these if you do not care about preventing the lost update problem. Query Parameters The API has a few query parameters available that you can use to help find the resources you need. All of these query parameters are only available on listing endpoints, so endpoints that return an array of items. Pagination As Query Parameter ?page=1&per_page=50&offset=0 As Header X-Page: 1 X-Per-Page: 50 X-Offset: 0 By default, the API returns 12 items per page and defaults to page 1. The number of items per page can be increased to a maximum of 50 items. Offset offset is a special parameter within our pagination system; the offset will remove the first n items from the list you are querying. offset can be used to work around getting duplicate data. So, for example: if the list has 600 items and the offset is set to 100, the X-Paginate-Total will report 500 items, and other headers like X-Paginate-Pages will also be calculated from the new total. Sorting Sort ID descending and name ascending ?sort=-id,name The API supports ascending or descending sorting on multiple columns (separated by a comma) on the resources.
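To illustrate midair collision prevention, the sketch below builds the conditional headers for an update request. The helper name, host and token are placeholders; only the header names and the Accept requirement come from the documentation above.

```javascript
// Sketch of midair-collision prevention with standard conditional headers.
// Build headers for a conditional update: the update is only applied when
// the server-side ETag still matches the one we last saw.
function conditionalHeaders(token, etag) {
  return {
    'Authorization': `Bearer ${token}`,
    'Accept': 'application/json',   // required for all API requests
    'Content-Type': 'application/json',
    'If-Match': etag,               // reject the update if the resource changed
  };
}

// Usage with fetch (host is a placeholder; /v1/users/1 is the update route above):
// const res = await fetch('https://example.test/v1/users/1', {
//   method: 'PATCH',
//   headers: conditionalHeaders(token, etagFromGet),
//   body: JSON.stringify({ profession: 'Developer' }),
// });
// A 412 Precondition Failed response means someone else updated first.
```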
Sortable columns are whitelisted inside the API; look in the model list below for supported columns Searching Search for name LIKE "Kevin" and company that ends with "4News" ?search[name]=Kevin&search[company]=$:4News Searching can be done on multiple columns; we use the URL array syntax for this. The basic syntax is operator:value, so: =:4News The same goes for searchable columns; these are whitelisted per resource The available operators are: !: Not operator =: Equals operator >: Bigger than operator <: Smaller than operator >=: Bigger than or equals operator <=: Smaller than or equals operator ^: Starts with operator $: Ends with operator ~: Or no operator, that will result in a LIKE statement Keywords There are a few keywords throughout the API that you can use in the URL as shortcuts to certain resources. GET /v1/users/me For example, you can use me as a keyword for a user. This will return the resource of the user that is associated with the token used to make the request. GET /v1/organisations/mine A manager can use the mine keyword to get a list of organisations he/she manages. GET /v1/jobs/1/revisions/last To get the last revision for a job, you can use the last keyword. Wrapper You can install the library using: npm install @mapcreator/maps4news If you are using JavaScript to develop your app then you are in luck. We have created a query builder-like library that is able to do everything our API offers. It even does the OAuth login for you, in redirect, popup or password flow. The library is freely available on github and npm. Have a look at the Wrapper's ESDoc API documentation. Installation // Using npm npm install --save @mapcreator/maps4news Installation can be done either through a node package manager, such as npm or yarn, or by including the browser bundle.
NodeJS var m4n = require('@mapcreator/maps4news'); // Do stuff var auth = new m4n.ImplicitFlow(1); var api = new m4n.Maps4News(auth); After installation the package can be imported as follows ES6 import { Maps4News, DummyFlow } from '@mapcreator/maps4news'; // Do stuff var auth = new DummyFlow(); var api = new Maps4News(auth); Or when using ES6 import statements Browser Script Tag <script src=""></script> This html tag can be used without any other dependency in your html. const { Maps4News, DummyFlow } = window.maps4news; // Do stuff var auth = new DummyFlow(); var api = new Maps4News(auth); You can also include the wrapper via a script tag in your html file. Authentication Authentication is done through OAuth. This library provides multiple OAuth flow implementations for authentication. The client will first check if any tokens can be found in the cache before requiring authentication. If one can be found the api.authenticate() method will instantly resolve without any side-effects. The variable api.authenticated will be set to true if a token has been found and is still valid. Tokens are stored in HTTPS cookies if possible and using localStorage when the browser is not using a HTTPS connection. NodeJS uses a file named .m4n_token to store the token. Authentication Web Multiple flows are supported for web browsers. All the web examples assume the web build of the library has been included in the target page. Machine token const token = "..."; const api = new Maps4News(token); A machine token can be used directly while instantiating the api instance. Implicit flow // Obtained client id var clientId = 1; // Callback url is set to the current url by default var auth = new ImplicitFlow(clientId); var api = new Maps4News(auth); // This will hijack the page if no authentication cache can // be found. Smartest thing to do is to just let it happen // and initialize any other code afterwards. 
api.authenticate().then(function() { // Save the token api.saveToken(); // Get the current user and dump the result to the console. api.users.get('me').then(console.dir); }); A client id is required to use the implicit flow. The redirect url must be the same as the one linked to the client id. The callback url is automatically guessed if none is provided. Implicit flow pop-up index.html var clientId = 1; var callbackUrl = ''; var auth = new ImplicitFlowPopup(clientId); var api = new Maps4News(auth); api.authenticate().then(function() { // Save the token api.saveToken(); // Get the current user and dump the result to the console. api.users.get('me').then(console.dir); }); callback.html <html><body> <h1>Nothing to see here 👻</h1> </body></html> This will create a pop-up window containing the login page. Once the pop-up redirects back to the callback it will resolve the promise. The callback can be an empty page hosted on the same domain. Callback url is set to the current url by default. The script is smart enough to close the page if it detects that it's a child after authentication. This means that either the current page can be set as the callback (default) or a blank page. The callback must be hosted on the same domain as the application to allow for cross window communication. Dummy flow var auth = new DummyFlow(); var api = new Maps4News(auth); // Manually check if we're logged in if (api.authenticated) { console.log('Found authentication token in cache!'); } api.authenticate().then(function() { // Will only resolve if a token was found console.log("We're authenticated"); }).catch(function(err) { // This will be called if `api.authenticated` is false console.log(err.toString()); }); The dummy flow can be used when a token should be present in the cache. Basics These examples assume that an instance of the api exists and is authenticated. See the node and web authentication examples for more information on authenticating.
const me = await api.users.get('me'); const colors = await me.colors.list(); Which is the same as const colors = await api.users.select('me').colors.list(); The wrapper exposes relations which return proxies. These proxies can be used to either build a route to a resource or to fetch resources. This means that api.users.get('me') is the same as calling the route /v1/users/me. All proxies expose the methods new, list and lister. Most proxies expose the methods select and get. Async methods return a Promise this means that both then/catch and await/async syntax are supported. // Case translation const data = { foo_bar_baz: 123 }; const test = api.static().new(data); test.fooBarBaz === 123; // true The wrapper will transform snake_case named variables returned from the api into camelCase named variables. This means that for example place_name will be transformed into placeName. Getting a resource Fetch resource and all its properties api.colors.get(1).then(function(color) { console.log(color.id + " " + color.name + ": " + color.hex); }); Select the current user to quickly obtain related mapstyle sets api.users.select('me').mapstyleSets.list().then(function(sets) { for (const set of sets.data) { console.log(`[${set.id}] ${set.name}`); } }); Resources are bound to the base api class by default. Resources can be fetched in two ways; by selecting them ( .select) or by fetching them ( .get). Selecting them will only set the object's id to its properties. Fetching a resource returns a Promise that will resolve with the requested resource. Selection is only useful as a stepping stone to related resources that can be easily obtained using the id of the parent. Please refer to the api documentation for further reference. 
Create a new resource var data = { name: 'Smurf', hex: '88CCFF' }; api.colors.new(data).save().then(console.dir); Create a new color and dump the new resource to the console after saving. Modify a resource api.users.get('me').then(me => { me.profession = 'Developer'; me.save(); // Optional chaining to get the updated resource }); Change profession of the current user and save it. Clone a resource api.colors.get(1).then(color => { color.id = null; color.save(); }); Setting the id to null forces the creation of a new object upon saving. Pagination Listing resources with pagination. First page with 5 items per page api.colors.list(1, 5).then(page => { console.log('Got resources:'); for (var i = 0; i < page.data.length; i++) { console.log(page.data[i].toString()); } }); Loop over every page and print the result to the console function parsePages(page) { for (var i = 0; i < page.data.length; i++) { console.log(page.data[i].toString()); } if (page.hasNext) { console.log('Grabbing page ' + (page.page + 1)); page.next().then(parsePages); } } api.colors .list(1, 50) .then(parsePages); Loop over all pages and return the data in a promise function parsePages(page) { var data = []; function parse(page) { data = data.concat(page.data); if(page.hasNext) { return page.next().then(parse); } else { return data; } } return parse(page); } api.colors .list(1, 50) .then(parsePages) .then(d => console.log('Total rows: ' + d.length)); Select current user but do not fetch any info to make fetching resources easier api.users.select('me').colors.list().then(page => { console.dir(page.data); }); warning: The paginatedResourceListing is in the process of being deprecated. Searching Resource lists can be queried to search for specific records as follows var query = { name: '^:test', scale_min: ['>:1', '<:10'], } api.layers.search(query).then(console.dir); deprecated - Will change soon. The search method is an extension of list. This means that .search({}) is the same as list().
More information about search query formatting can be found in the api documentation. Examples Building a Map Prerequisites: - You have an authenticated Wrapper instance or Token that you can use for authentication. Notes: - For JS this example uses our API Wrapper - For PHP this example uses GuzzleHttp - We're gonna build the map defined in this json file To build a map via our system, you first need to create a few resources. const api = new Maps4News(token); // 1. Job const job = await api.jobs.new({ jobTypeId: 7, title: 'My Map' }).save(); // 2. Job Revision import * as mapObject from './map.json'; // NodeJS const revision = await job.revisions.new({ languageCode: 'eng', mapstyleSetId: 60 // Here Mapstyle }).save(mapObject); // 3. Building const build = await revision.build(''); // 4. Job Result const result = await revision.result(); // 5. Getting the preview const preview = await result.downloadPreview(); window.location = preview; <?php $http = new GuzzleHttp\Client([ 'base_uri' => '', 'headers' => [ 'Authorization' => "Bearer $token", 'Content-Type' => 'application/json', 'Accept' => 'application/json', ], ]); // 1. Job $jobResponse = $http->post('v1/jobs', [ GuzzleHttp\RequestOptions::JSON => [ 'job_type_id' => 7, // API Map 'title' => 'My Map', ], ]); $job = json_decode($jobResponse->getBody()); // 2. Job Revision $mapObject = file_get_contents('./map.json'); // As a string $revisionResponse = $http->post("/v1/jobs/$job->id/revisions", [ GuzzleHttp\RequestOptions::JSON => [ 'language_code' => 'eng', 'mapstyle_set_id' => 60, // Here Mapstyle 'object' => $mapObject, ], ]); $revision = json_decode($revisionResponse->getBody()); // 3. Building $buildResponse = $http->post("/v1/jobs/$job->id/revisions/$revision->revision/build"); $build = json_decode($buildResponse->getBody()); // 4. Job Result $resultResponse = $http->get("/v1/jobs/$job->id/revisions/$revision->revision/result"); $result = json_decode($resultResponse->getBody()); // 5. 
Getting the Preview $previewResponse = $http->get("/v1/jobs/$job->id/revisions/$revision->revision/result/preview"); header('Content-Type: image/png'); echo $previewResponse->getBody(); Steps - 0. Let's set up the basis for our application. - 1. Firstly we are gonna create a Job instance. A Job is a project on the Maps4News platform. We're gonna create an Annotation Map ( job_type_id), which is a normal map with icons on it, and we're also giving our map a title. - 2. Second a Job Revision. A Revision is a point-in-time that the user decided to save his/her current progress in designing their map. A Revision requires us to give it a language_code, these are 3 character strings (eng, ger, ita, dut, etc.), as well as a mapstyle_set_id and the map json as a string. (A list of available mapstyle sets can be gotten from /users/me/mapstyle-sets) A map object must be given to each revision. Revisions can not be updated; each save will result in a new revision. Details about how to make a map object can be found on the map object page. - 3. If your map object was valid and the revision was created we can queue a build of your map. This will create a JobResult resource for that revision. - 4. You can access your result via the result method on the revision. Expect your result to be queued or processing if you get your result directly after queueing a build. It generally takes a few seconds to a few minutes to generate a map. - 5. The last step in this example is to get the preview image for the map. The API will return an image/png for all previews. The Final Result
https://docs.maps4news.com/v1/index.html
2020-07-02T09:06:20
CC-MAIN-2020-29
1593655878639.9
[array(['../images/map_preview.png', 'Map Preview'], dtype=object)]
docs.maps4news.com
Connector Migration Guide - DevKit 3.6 to 3.7 July 8, 2015 Migrating from DevKit 3.6 to 3.7 The sections that follow list DevKit changes between versions 3.6.n and 3.7.0. New Connector Functional Testing Framework The goal of this new framework is twofold. In the first place, it eases the test development phase by decoupling Mule flows from the test logic itself: tests no longer reference flows, and therefore no notion of Mule flows is required on the test developer's side. In the second place, it now allows connector tests to run on different Mule versions, either locally or in CloudHub, in an automatic manner. As a result, we can now test connector code against multiple Mule versions, assuring backward-compatibility, forward-compatibility, and library-compatibility. Old Mule Connector Test For DevKit 3.6.n and Previous Versions Consider the following when running a test with the old test framework. Let's consider an example with the Salesforce connector test suite. Extend from ConnectorTestCase. This class provides methods such as initializeTestRunMessage, runFlowAndGetPayload, or upsertOnTestRunMessage. These methods let you load test data through Spring beans, run a flow, get the resulting payload, and add data to a common test data container, respectively. package org.mule.modules.salesforce.automation.testcases; import org.mule.modules.tests.ConnectorTestCase; public abstract class AbstractTestCase extends ConnectorTestCase { } Load test data and set up the test context by loading test data through initializeTestRunMessage(springName), running a particular flow by means of runFlowAndGetPayload(flowName), and keeping the resulting value with upsertOnTestRunMessage(key,value). These methods require a Spring bean definition file, normally called automationSpringBeans.xml, and a Mule application flows file, normally called automation-test-flows.xml. package org.mule.modules.salesforce.automation.testcases; ... 
public class AbortJobTestCases extends AbstractTestCase { @Before public void setUp() throws Exception { initializeTestRunMessage("abortJobTestData"); JobInfo jobInfo = runFlowAndGetPayload("create-job"); upsertOnTestRunMessage("jobId", jobInfo.getId()); } ... Execute the test, where different flows can be called by means of runFlowAndGetPayload(flowName), runFlowAndExpectProperty(flowName, propertyName, expectedObject), or runFlowWithPayloadAndExpect(flowName, expectedObject, payload), among other available methods. @Category({RegressionTests.class}) @Test public void testAbortJob() { try { JobInfo jobInfo = runFlowAndGetPayload("abort-job"); assertEquals(com.sforce.async.JobStateEnum.Aborted, jobInfo.getState()); assertEquals(getTestRunMessageValue("jobId").toString(), jobInfo.getId()); assertEquals(getTestRunMessageValue("concurrencyMode").toString(), jobInfo.getConcurrencyMode().toString()); assertEquals(getTestRunMessageValue("operation").toString(), jobInfo.getOperation().toString()); assertEquals(getTestRunMessageValue("contentType").toString(), jobInfo.getContentType().toString()); } catch (Exception e) { fail(ConnectorTestUtils.getStackTrace(e)); } } } Take-Away From the Pre-3.7 Test Framework The following is what a normal test looks like with the pre-3.7 test framework, where we can observe two things. On the one hand, we have the test data in a Spring bean file, which normally looks like this:

<beans xmlns="http://www.springframework.org/schema/beans" ...>
    <context:property-placeholder ... />
    <util:map id="abortJobTestData">
        <entry key="type" value="Account" />
        <entry key="concurrencyMode" value="Parallel" />
        <entry key="contentType" value="XML" />
        <entry key="externalIdFieldName" value="Id" />
        <entry key="operation" value="insert" />
    </util:map>
</beans>

This Spring file gathers all test data used throughout the entire test execution phase.
On the other hand, we have a *Mule application flows* file, which looks like this:

<mule xmlns="http://www.mulesoft.org/schema/mule/core" ...>
    <context:property-placeholder ... />
    <sfdc:config ... />
    <flow name="create-job" doc:
        <sfdc:create-job ... />
    </flow>
    <flow name="abort-job" doc:
        <sfdc:abort-job ... />
    </flow>
</mule>

This Mule application flows file defines the way a Salesforce operation (keeping in mind we are working with Salesforce as an example) is executed. A flow defines a particular operation to be carried out, a name, the connector configuration to be used and every parameter for that particular operation. A Mule application is formed by flows, which are defined in one (or many) Mule application flows files. Therefore, in order to run a test (or a battery of tests), you define a Spring bean file along with flows files, mostly disaggregating the test data, the methods to be run and the logic of the test itself. It becomes virtually impossible to understand a test by simply reading a test class without either the Spring file or the flows file. The goal of the new connector test framework is to make a test self-contained, decoupling the test from Mule flows and Spring beans. You need only a minimal understanding of how Mule runs, keeping the test data within the test itself (or close enough). The next section introduces the new connector test framework along with its features. We additionally show different use cases, including features such as pagination or Mule DataSense. Migration Guideline to the New Framework Migration from the previous Mule Connector Test approach to this new framework has been carefully thought out and as a result we have easy-to-follow migration guidelines. Iterative Migration We strongly advise connector developers to move current connector tests to a legacy package. For example, if you currently have a package named org.mule.modules.connector.automation.testcases, rename it to org.mule.modules.connector.automation.testcases.legacy. 
Then create a package org.mule.modules.connector.automation.testcases, as before. This newly created package now contains every migrated test. Test resources are likely to be used within the migrated tests, and therefore we advise leaving these resources as they are, normally within src/test/resources. Some tests might not be migrated, either due to framework limitations or to developer choice. If framework limitations or problems arise during migration, inform Mule Support.

Keep in mind that we currently do not package the old framework Maven dependency required to run the legacy test suite. That said, if you maintain the legacy suite, you must manually add the dependency to the pom.xml file:

<dependency>
    <groupId>org.mule.modules</groupId>
    <artifactId>mule-connector-test</artifactId>
    <version>2.0.7</version>
    <scope>test</scope>
</dependency>

Calling a Connector Method Versus a Mule Flow

The major change from Mule Connector Test to this new test framework is how operations are called and executed. Let's consider the following example:

...
initializeTestRunMessage("sampleTestCaseData");
JobInfo jobInfo = runFlowAndGetPayload("create-job");
upsertOnTestRunMessage("jobId", jobInfo.getId());
...

We first need to load the test data by means of a Spring bean, called sampleTestCaseData, defined in an external Spring beans file. Next, we need to run a Mule flow, called create-job, defined as well in an external file. Finally, we need to add the recently obtained job identifier to a common data container for later use. This requires understanding Spring beans, Mule flows, and three different methods from ConnectorTestCase just to execute a simple create-job operation.

We have radically changed this approach. We have simplified the way a test developer writes a test by enabling direct access to the operations of a connector. Only special operations, such as paginated ones, require alternative methods.
Considering the same example as before, we now have a simplified interface (given that we already have a connector mockup instance):

...
JobInfo jobInfo = connector.createJob(OperationEnum.insert, "Account", "Id", ContentType.XML, ConcurrencyMode.Parallel);

The main characteristic is that the concept of Mule flows disappears and the test data is bundled within the test itself.

Test Data Management

Test data is currently maintained within Spring beans. We encourage you to drop support for Spring beans and follow these practices:

If test objects are simple (Strings, Integers, etc.), just add them to the test itself, as in:

JobInfo jobInfo = connector.createJob(OperationEnum.insert, "Account", "Id", ContentType.XML, ConcurrencyMode.Parallel);

If test objects are complex, such as domain objects, implement a DataBuilder and use it as follows:

List<Map<String, Object>> batchPayload = DataBuilder.createdBatchPayload();
batchInfo = connector.createBatch(jobInfo, batchPayload);

Implementing a DataBuilder is mandatory to keep tests consistent. However, the DataBuilder can read the existing Spring beans to load already defined objects, or create new ones from scratch following the builder pattern. If loading existing Spring beans to build objects, one possible approach is to use an ApplicationContext inside the data builder class:

import ...

public class TestDataBuilder {

    // Held in a static field so the static factory method below can use it.
    private static final ApplicationContext context =
            new ClassPathXmlApplicationContext("automationSpringBeans.xml");

    public static CustomObjectType createCustomTestData() {
        CustomObjectType ret = (CustomObjectType) context.getBean("customObject");
        return ret;
    }

    public static void shutDownDataBuilder() {
        ((ConfigurableApplicationContext) context).close();
    }
}

@Configurable Fields Not Supported at @Connector/@Module Class Level

In DevKit 3.7.n, @Configurable fields in @Connector and/or @Module classes are no longer encouraged. You should move @Configurable fields to a proper @Config.
3.6.n Connector Example

The following shows how the @Connector class was coded in version 3.6.n:

@Connector(name="my-connector", friendlyName="MyConnector")
public class MyConnector {

    @Configurable
    String token;

    @Config
    ConnectorConfiguration config;

    @Processor
    public String myProcessor(String param) {
        ...
    }
}

3.7.n Connector Example

The following shows how the @Connector class is now coded in version 3.7.n:

@Connector(name="my-connector", friendlyName="MyConnector")
public class MyConnector {

    @Config
    ConnectorConfiguration config;

    @Processor
    public String myProcessor(String param) {
        ...
    }
}

@Configuration(configElementName="config", friendlyName="Configuration")
public class ConnectorConfiguration {

    @Configurable
    String token;

    // More @Configurable fields ...
}

Important: If you want to share @Configurable fields between @Config classes, create an abstract class and make all your @Config classes extend that parent element that contains the shared @Configurable fields.

@Inject Is Not Supported at @Processor Level

Mule 3.7 is compliant with the JSR-330 specification. Because of that, the @Inject annotation at the @Processor level is invalid. Starting with DevKit 3.7, if the method signature has either MuleEvent or MuleMessage as a parameter, DevKit properly injects the parameter when the processor is called.

Important: DevKit does not support the JSR-330 specification.
3.6.n Legacy @Inject Example

The following shows how @Inject was used in version 3.6.n:

@Inject
@Processor
public boolean parameterInjectionModule(MuleEvent event, MuleMessage message) throws Exception {
    if (event == null || message == null) {
        throw new RuntimeException("MuleEvent or MuleMessage cannot be null");
    }
    return true;
}

3.7.n @Processor Example With Parameter Injection

The following shows how to inject a parameter in version 3.7.n:

@Processor
public boolean parameterInjectionModule(MuleEvent event, MuleMessage message) throws Exception {
    if (event == null || message == null) {
        throw new RuntimeException("MuleEvent or MuleMessage cannot be null");
    }
    return true;
}
https://docs.mulesoft.com/release-notes/connector/connector-migration-guide-mule-3.6-to-3.7
A user with Production rights can access the print magazines in the navigation column under Channels >>> Print. Search for and select the magazine channel you want to work with. In the tab named Editorial products you can find any editorial inventory for your selected magazine.

Note: in RunMags, an editorial is distinguished from an advertorial by the fact that it's not possible to get paid money for an editorial. Advertorials are treated just as advertising products through billing. The editorial tool is just a means to keep track of editorial commitments you're making to advertisers, which can be added to a deal including paid advertising.

Creating new editorial products and editing existing ones

To add a new editorial product, click the green New button. To edit an editorial product, click the orange Edit button or double-click the selected item. The following form will be displayed.

Form field reference

- The product name and description are used in proposals and in production.
- The sort field dictates the ordering in the dropdown field when you select editorials to be added to deals.
- The size field is optional, but if used it can dictate how large the editorial will be in the flatplan.

Click Save when all the information has been entered.
https://docs.runmags.com/en/articles/104383-viewing-and-adding-editorial-products
Follow these steps to uninstall your SuiteCRM instance: Navigate to the directory within your web server where SuiteCRM is located. Remove the SuiteCRM directory. In Linux, you can use rm -r <suitedirectory> if you wish to be prompted, or rm -rf <suitedirectory> if you wish to delete the directory without being prompted. Delete the SuiteCRM database schema from your server database. The default is suitecrm, but it will differ if it has been renamed during the installation process. Content is available under GNU Free Documentation License 1.3 or later unless otherwise noted.
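The directory-removal step can also be scripted. As a hedged illustration (the paths here are placeholders for a demo, not a real SuiteCRM install), Python's `shutil.rmtree` behaves like `rm -rf`: recursive and with no prompt, so use it with the same care:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for the SuiteCRM install directory (placeholder for this demo).
suite_dir = Path(tempfile.mkdtemp()) / "suitecrm"
(suite_dir / "cache").mkdir(parents=True)
(suite_dir / "config.php").write_text("<?php // demo\n")

# Equivalent of `rm -rf <suitedirectory>`: recursive removal, no prompting.
shutil.rmtree(suite_dir)
print(suite_dir.exists())  # False
```

The database schema still has to be dropped separately on the database server, exactly as described above.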
https://docs.suitecrm.com/admin/installation-guide/uninstalling/
The Documents module can be used as a repository for customer-issued or internal files. This content can be uploaded, revised and viewed, in addition to being related to individual records within SuiteCRM.

You can access the Documents actions from the Documents module menu drop-down or via the Sidebar. The Documents actions are as follows:

Create Document – A new form is opened in Edit View to allow you to create a new Document record.

View Documents – Redirects you to the List View for the Documents module. This allows you to search and list Document records.

To view the full list of fields available when creating a Document, see Documents Field List.

To sort records on the Documents List View, click any column title which is sortable. This will sort the column either ascending or descending.

To search for a Document, see the Search section of this user guide.

To update some or all of the Documents on the List View, use the Mass Update panel as described in the Mass Updating Records section of this user guide.

To duplicate a Document, you can click the Duplicate button on the Detail View and then save the duplicate record.

To delete one or multiple Documents, you can select multiple records from the List View and click Delete. You can also delete a Document from the Detail View by clicking the Delete button. For a more detailed guide on deleting records, see the Deleting Records section of this user guide.

To view the details of a Document, click the Document Name in the List View. This will open the record in Detail View.

To view an attachment, click the attachment link on the List View or Detail View of the Document. To update a document, you can create a Document Revision.

To edit the Document details, click the Edit icon within the List View or click the Edit button on the Detail View, make the necessary changes, and click Save.

For a detailed guide on importing and exporting Documents, see the Importing Records and Exporting Records sections of this user guide.
To track all changes to audited fields, in the Document record, you can click the View Change Log button on the Document’s Detail View or Edit View. Content is available under GNU Free Documentation License 1.3 or later unless otherwise noted.
https://docs.suitecrm.com/user/core-modules/documents/
UI Guidelines Local Navigation Activity indicators and progress indicators Activity indicators and progress indicators show users that their BlackBerry® PlayBook™ tablet is performing an action, such as searching for items or connecting to a Wi-Fi® network. Users cannot interact with these indicators. If you can determine the duration of an action, use a progress indicator. If you want to show that your application is working but you cannot determine the duration of the action, use an activity indicator. Best practices - Always indicate progress when an action takes more than 2 seconds to complete. - Provide useful progress information. For example, if users are downloading an application, indicate the percentage of data that has been downloaded. Be as accurate as possible with the progress information. - For progress indicators, use concise, descriptive text to indicate what the action is (for example, "Loading data" or "Building an application list"). If an action takes a long time and you want to communicate what is happening at each stage, provide text that describes each stage (for example, "Downloading" or "Installing"). - Use sentence case capitalization. Capitalize the first word and any other word that requires capitalization (such as a proper noun). Next topic: Designing application icons Previous topic: Media controls Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/27299/Activity_indicators_and_progress_indicators_1340785_11.jsp
The Introduce Assertion refactoring recommends that if a section of code is going to make an assumption about the current state of the program, an explicit assertion should check those assumptions first.

A common usage of this refactoring is to check that each method (and potentially each constructor) checks its preconditions using assertions. This is a particular form of defensive programming. Some argue that if you have sufficient tests in place, you don't need to apply defensive programming. For the small extra performance penalty, it seems like a small price to pay for extra resilience in our system.

Suppose we have the following method:

/**
 * Interleave two strings.
 * Assumes input parameters aren't null.
 */
def interleave(String a, String b) {
    ...
}
// => NullPointerException (somewhere within the method call)

If we call it with valid parameters, everything is fine. If we call it with null we will receive a NullPointerException during the method's execution. This can sometimes be difficult to track down.

Applying this refactoring gives us the following code:

package introduceAssertion

def interleave(String a, String b) {
    assert a != null, 'First parameter must not be null'
    assert b != null, 'Second parameter must not be null'
    ...
}
// => AssertionError: First parameter must not be null

This is better because we become aware of any problems straight away.
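The same refactoring is language-independent. As a sketch, here is a Python analogue; note that the body of `interleave` is a hypothetical implementation (the original article elides it), while the precondition assertions mirror the Groovy version:

```python
def interleave(a, b):
    """Interleave two strings, checking preconditions explicitly."""
    # Introduce Assertion: fail fast with a clear message instead of
    # raising an obscure error somewhere deep inside the method body.
    assert a is not None, "First parameter must not be None"
    assert b is not None, "Second parameter must not be None"
    # Hypothetical body: alternate characters, then append any leftover tail.
    n = min(len(a), len(b))
    mixed = "".join(x + y for x, y in zip(a, b))
    return mixed + a[n:] + b[n:]

print(interleave("abc", "123"))  # a1b2c3
```

Calling `interleave(None, "x")` now raises `AssertionError: First parameter must not be None` at the call boundary, rather than a `TypeError` from somewhere inside the body.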
http://docs.codehaus.org/exportword?pageId=78747
Fabric’s primary operations, run and sudo, are capable of sending local input to the remote end, in a manner nearly identical to the ssh program. For example, programs which display password prompts (e.g. a database dump utility, or changing a user’s password) will behave just as if you were interacting with them directly. However, as with ssh itself, Fabric’s implementation of this feature is subject to a handful of limitations which are not always intuitive. This document discusses such issues in detail. Note Readers unfamiliar with the basics of Unix stdout and stderr pipes, and/or terminal devices, may wish to visit the Wikipedia pages for Unix pipelines and Pseudo terminals respectively. The first issue to be aware of is that of the stdout and stderr streams, and why they are separated or combined as needed. Fabric 0.9.x and earlier, and Python itself, buffer output on a line-by-line basis: text is not printed to the user until a newline character is found. This works fine in most situations but becomes problematic when one needs to deal with partial-line output such as prompts. Note Line-buffered output can make programs appear to halt or freeze for no reason, as prompts print out text without a newline, waiting for the user to enter their input and press Return. Newer Fabric versions buffer both input and output on a character-by-character basis in order to make interaction with prompts possible. This has the convenient side effect of enabling interaction with complex programs utilizing the “curses” libraries or which otherwise redraw the screen (think top). Unfortunately, printing to stderr and stdout simultaneously (as many programs do) means that when the two streams are printed independently one byte at a time, they can become garbled or meshed together. While this can sometimes be mitigated by line-buffering one of the streams and not the other, it’s still a serious issue. 
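The garbling trade-off is easy to see without Fabric at all. As a Fabric-free sketch using plain `subprocess`: with separate pipes, the relative ordering between stdout and stderr is lost; redirecting stderr into stdout (the same merging Fabric applies via its `combine_stderr` setting) keeps the natural ordering, at the cost of an empty stderr:

```python
import subprocess
import sys

# A child process that writes to both stdout and stderr, flushing each write.
child = (
    "import sys;"
    "sys.stdout.write('out1\\n'); sys.stdout.flush();"
    "sys.stderr.write('err1\\n'); sys.stderr.flush();"
    "sys.stdout.write('out2\\n'); sys.stdout.flush()"
)

# Separate pipes: two independent streams; we can no longer tell that err1
# was emitted between out1 and out2.
separate = subprocess.run(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True,
)
print("stdout:", separate.stdout.split())   # ['out1', 'out2']
print("stderr:", separate.stderr.split())   # ['err1']

# Merged: one stream with natural ordering, but stderr is empty --
# exactly the trade-off described above.
merged = subprocess.run(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
print("merged:", merged.stdout.split())     # ['out1', 'err1', 'out2']
```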
To solve this problem, Fabric uses a setting in our SSH layer which merges the two streams at a low level and causes output to appear more naturally. This setting is represented in Fabric as the combine_stderr env var and keyword argument, and is True by default. Due to this default setting, output will appear correctly, but at the cost of an empty .stderr attribute on the return values of run/sudo, as all output will appear to be stdout. Conversely, users requiring a distinct stderr stream at the Python level and who aren’t bothered by garbled user-facing output (or who are hiding stdout and stderr from the command in question) may opt to set this to False as needed. The other main issue to consider when presenting interactive prompts to users is that of echoing the user’s own input. Typical terminal applications or bona fide text terminals (e.g. when using a Unix system without a running GUI) present programs with a terminal device called a tty or pty (for pseudo-terminal). These automatically echo all text typed into them back out to the user (via stdout), as interaction without seeing what you had just typed would be difficult. Terminal devices are also able to conditionally turn off echoing, allowing secure password prompts. However, it’s possible for programs to be run without a tty or pty present at all (consider cron jobs, for example) and in this situation, any stdin data being fed to the program won’t be echoed. This is desirable for programs being run without any humans around, and it’s also Fabric’s old default mode of operation. Unfortunately, in the context of executing commands via Fabric, when no pty is present to echo a user’s stdin, Fabric must echo it for them. This is sufficient for many applications, but it presents problems for password prompts, which become insecure. 
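The terminal-device distinction above is also observable from any program: when stdout is a pipe rather than a tty/pty, `isatty()` reports `False`, and that is precisely the signal programs use to decide whether to echo, prompt, or colorize. A minimal Fabric-free sketch:

```python
import subprocess
import sys

# Ask a child Python process whether its stdout is a terminal. Because we
# capture its stdout through a pipe, isatty() is False -- the same reason
# Fabric must force a pty when it wants terminal-like behavior remotely.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"],
    stdout=subprocess.PIPE, text=True,
)
print(result.stdout.strip())  # False
```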
In the interests of security and meeting the principle of least surprise (insofar as users are typically expecting things to behave as they would when run in a terminal emulator), Fabric 1.0 and greater force a pty by default. With a pty enabled, Fabric simply allows the remote end to handle echoing or hiding of stdin and does not echo anything itself. Note In addition to allowing normal echo behavior, a pty also means programs that behave differently when attached to a terminal device will then do so. For example, programs that colorize output on terminals but not when run in the background will print colored output. Be wary of this if you inspect the return value of run or sudo! For situations requiring the pty behavior turned off, the --no-pty command-line argument and always_use_pty env var may be used. As a final note, keep in mind that use of pseudo-terminals effectively implies combining stdout and stderr – in much the same way as the combine_stderr setting does. This is because a terminal device naturally sends both stdout and stderr to the same place – the user’s display – thus making it impossible to differentiate between them. However, at the Fabric level, the two groups of settings are distinct from one another and may be combined in various ways. The default is for both to be set to True; the other combinations are as follows:
http://docs.fabfile.org/en/1.3.3/usage/interactivity.html?highlight=interactive
A universal function (or ufunc for short) is a function that operates on ndarrays in an element-by-element fashion, supporting array broadcasting, type casting, and several other standard features. That is, a ufunc is a "vectorized" wrapper for a function that takes a fixed number of scalar inputs and produces a fixed number of scalar outputs.

In Numpy, universal functions are instances of the numpy.ufunc class. Many of the built-in functions are implemented in compiled C code, but ufunc instances can also be produced using the frompyfunc factory function. The output of the ufunc (and its methods) is not necessarily an ndarray, if all input arguments are not ndarrays.

The casting-rules table can differ between platforms (for example, on a 32-bit versus a 64-bit system). You can generate this table for your system with the code given in the figure.

There are some informational attributes that universal functions possess. None of the attributes can be set.

All ufuncs have four methods. However, these methods only make sense on ufuncs that take two input arguments and return one output argument. Attempting to call these methods on other ufuncs will cause a ValueError. The reduce-like methods all take an axis keyword and a dtype keyword, and the arrays must all have dimension >= 1. The axis keyword specifies the axis of the array over which the reduction will take place and may be negative, but must be an integer. The dtype keyword allows you to manage a very common problem that arises when naively using ufunc.reduce.

Warning: A reduce-like operation on an array with a data-type that has a range "too small" to handle the result will silently wrap. One should use dtype to increase the size of the data-type over which reduction takes place.

Each ufunc will be described as if acting on a set of scalar inputs to return a set of scalar outputs.

Note: The ufunc still returns its output(s) even if you use the optional output argument(s).

All trigonometric functions use radians when an angle is called for. The ratio of degrees to radians is 180°/π.

The bit-twiddling functions all require integer arguments and they manipulate the bit-pattern of those arguments.
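Assuming NumPy is available, the pieces described above can be exercised directly; this sketch shows the `frompyfunc` factory plus the reduce-like methods on built-in two-input ufuncs, and the `dtype` keyword used to widen the accumulator:

```python
import numpy as np

# A ufunc built from a plain Python function: one input, one output.
# Note the result has object dtype, as frompyfunc always produces.
double = np.frompyfunc(lambda x: 2 * x, 1, 1)
print(double(np.arange(3)))               # [0 2 4]

# The reduce-like methods only exist for two-in, one-out ufuncs such as add.
a = np.array([1, 2, 3, 4])
print(np.add.reduce(a))                   # 10
print(np.add.accumulate(a))               # [ 1  3  6 10]
print(np.multiply.outer([1, 2], [3, 4]))  # [[3 4] [6 8]]

# The dtype keyword controls the type the reduction is carried out in,
# which lets you avoid the silent wrap-around warned about above.
small = np.full(10, 200, dtype=np.uint8)
print(np.add.reduce(small, dtype=np.uint32))  # 2000
```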
http://docs.scipy.org/doc/numpy-1.5.x/reference/ufuncs.html
Use of square brackets to enclose the database name is also necessary if the name contains a dot: '.' e.g. mssql_select_db('Company.ERP'); Produces the error: Warning: mssql_select_db(): Sybase: Server message: Could not locate entry in sysdatabases for database 'Company'. No entry found with that name. Make sure that the name is entered correctly. (severity 16, procedure N/A) in mssql_select_db('[Company.ERP]'); Will select successfully
http://docs.php.net/manual/pt_BR/function.mssql-select-db.php
patsy is a Python package for describing statistical models (especially linear models, or models that have a linear component) and building design matrices. It is closely inspired by and compatible with the formula mini-language used in R and S.

What Patsy won't do is, well, statistics — it just lets you describe models in general terms. It doesn't know or care whether you ultimately want to do linear regression, time-series analysis, or fit a forest of decision trees, and it certainly won't do any of those things for you. But if you're using a statistical package that requires you to provide a raw model matrix, then you can use Patsy to painlessly construct that model matrix; and if you're the author of a statistics package, then I hope you'll consider integrating Patsy as part of your front-end.

Patsy's goal is to become the standard high-level interface to describing statistical models in Python, regardless of what particular model or library is being used underneath.

The current release may be downloaded from the Python Package Index. Or the latest development version may be found in our Git repository:

git clone git://github.com/pydata/patsy.git

Installing patsy requires:

If you have pip installed, then a simple pip install --upgrade patsy should get you the latest version. Otherwise, download and unpack the source distribution, and then run:

python setup.py install

Post your suggestions and questions directly to the pydata mailing list ([email protected], gmane archive), or to our bug tracker. You could also contact Nathaniel J. Smith directly, but really the mailing list is almost always a better bet, because more people will see your query and others will be able to benefit from any answers you get.

We currently know of the following projects using Patsy to provide a high-level interface to their statistical code:

If you'd like your project to appear here, see our documentation for library developers!
http://patsy.readthedocs.org/en/latest/overview.html
public interface WorkItem

Represents one unit of work that needs to be executed. It contains all the information that is necessary to execute this unit of work as parameters, and (possibly) results related to its execution. WorkItems represent a unit of work in an abstract, high-level and implementation-independent manner. They are created by the engine whenever an external task needs to be performed. The engine will delegate the work item to the appropriate WorkItemHandler for execution. Whenever a work item is completed (or whenever the work item cannot be executed and should be aborted), the work item manager should be notified.

For example, a work item could be created whenever an email needs to be sent. This work item would have a name that represents the type of work that needs to be executed (e.g. "Email") and parameters related to its execution (e.g. "From" = "[email protected]", "To" = ..., "Body" = ..., ...). Result parameters can contain results related to the execution of this work item (e.g. "Success" = true).

See also: WorkItemHandler, WorkItemManager

Fields:

static final int PENDING
static final int ACTIVE
static final int COMPLETED
static final int ABORTED

Methods:

long getId()

String getName()

int getState()

Object getParameter(String name)
    Returns null if the parameter cannot be found.
    name - the name of the parameter

Map<String,Object> getParameters()

Object getResult(String name)
    Returns null if the result cannot be found.
    name - the name of the result parameter

Map<String,Object> getResults()

long getProcessInstanceId()
http://docs.jboss.org/jbpm/v5.1/javadocs/org/drools/runtime/process/WorkItem.html
This page and the contents of the Help namespace cover the Wiki software used on this server: Mediawiki. Users of other wikis (e.g. DokuWiki/JD-Wiki) should make themselves familiar with the slightly different text formatting and wikitext syntax. Please read the Joomla! Editorial Style Guide and have a look at our list of Words to Watch. Wiki Manual:
http://docs.joomla.org/index.php?title=Help:Contents&diff=prev&oldid=83618
Groovy is an agile dynamic language for the JVM — bringing the power of languages like Python, Ruby and Smalltalk directly into the Java platform. As well as being a powerful language for scripting Java objects or writing test cases for Java systems, it can be used as an alternative compiler to javac to generate standard Java bytecode to be used by any Java project.
http://docs.codehaus.org/pages/viewpage.action?pageId=17993
Now we are able to query the model using standard Groovy. For example, we can print out all the books, or print out all the books with fewer than 240 pages. The braces indicate the containment relationships writers and books of the class Library. See the homepage of the Groovy EMF Builder for further details.
http://docs.codehaus.org/pages/viewpage.action?pageId=228171293
There are two different approaches used by graphic file formats for supporting transparent image areas: simple binary transparency and alpha transparency. Simple binary transparency is supported in the GIF format; one color from the indexed color palette is marked as the transparent color. Alpha transparency is supported in the PNG format; the transparency information is stored in a separate channel, the Alpha channel.

Procedure 6.1. Creating an Image with Transparent Areas (Alpha Transparency)

First of all, we will use the same image as in the previous tutorials, Wilber the GIMP mascot.

To save an image with alpha transparency, you must have an alpha channel. To check if the image has an alpha channel, go to the channel dialog and verify that an entry for "Alpha" exists, besides Red, Green and Blue. If this is not the case, add a new alpha channel from the layers menu: Layer → Transparency → Add Alpha Channel.

The original XCF file contains background layers that you can remove. GIMP comes with standard filters that support creating gradients; look under the Filters menu. You are only limited by your imagination. To demonstrate the capabilities of alpha transparency, a soft glow in the background around Wilber is shown.

After you're done with your image, you can save it in PNG format.

Figure 6.10. The Wilber image with transparency

Mid-tone checks in the background layer represent the transparent region of the saved image while you are working on it in GIMP.
http://docs.gimp.org/en_GB/gimp-using-web-transparency.html
Ticket #48 (closed defect: duplicate)

Only power up the phone in case power button was pressed > 3sec

Description

Currently, every short tap on the power button boots up the 2410. This is not how it is intended. Actually, we need to implement some delay.

Change History

* This bug has been marked as a duplicate of 13 *

Note: See TracTickets for help on using tickets.
http://docs.openmoko.org/trac/ticket/48
Web Integration Guide Local Navigation Search This Document BlackBerry Wallet MIME type detection Web developers must change the HTTP header for a web page that includes fields that the BlackBerry® Wallet can populate. Web developers must set the content-type to the BlackBerry Wallet MIME type. content-type: application/x-vnd.rim.bb.wallet The BlackBerry® Browser on the BlackBerry device detects the BlackBerry Wallet MIME type and adds the BlackBerry Wallet menu items to the BlackBerry Browser menu. The BlackBerry device user clicks the menu items to populate the fields on the web page with the data sets stored in the BlackBerry Wallet. In earlier versions of BlackBerry Wallet, web developers also had to set the charset but that is no longer needed. Previous topic: BlackBerry Wallet MIME type Was this information helpful? Send us your comments.
http://docs.blackberry.com/nl-nl/developers/deliverables/24683/BBWallet_MIME_type_detection_421562_11.jsp
The assignee of a review can be changed or simply removed. Once a review has been created on a violation, every Sonar user can see the review below the violation. Only the last comment on a review can be edited, and only by the creator of that comment.
http://docs.codehaus.org/pages/viewpage.action?pageId=231080392
Ticket #1918 (closed task: invalid)

Identify packages to go into SHR.

Description

Identify packages to go into SHR.

Change History

comment:2 Changed 5 years ago by zecke

- Component changed from System Software to unknown

And move to unknown until SHR has its own component.

comment:3 Changed 5 years ago by aleix

- Status changed from new to assigned
- Owner changed from shr-owner to wurp

comment:4 Changed 5 years ago by aleix

comment:5 Changed 5 years ago by Ainulindale

- Status changed from assigned to closed
- HasPatchForReview unset
- Resolution set to invalid

Still left to do. Task has been migrated to the "SHR Feed" category.

Do not end up on the kernel mailinglist.
http://docs.openmoko.org/trac/ticket/1918
:mod:`code` --- Interpreter base classes ======================================== .. module:: code :synopsis: Facilities to implement read-eval-print loops. The ``code`` module provides facilities to implement read-eval-print loops in Python. Two classes and convenience functions are included which can be used to build applications which provide an interactive interpreter prompt. .. class:: InteractiveInterpreter([locals])``. .. class:: InteractiveConsole([locals[, filename]]) Closely emulate the behavior of the interactive Python interpreter. This class builds on :class:`InteractiveInterpreter` and adds prompting using the familiar ``sys.ps1`` and ``sys.ps2``, and input buffering. .. function:: interact([banner[, readfunc[, local]]]) Convenience function to run a read-eval-print loop. This creates a new instance of :class:`InteractiveConsole` and sets *readfunc* to be used as the :meth:`raw_input` method, if provided. If *local* is provided, it is passed to the :class:`InteractiveConsole` constructor for use as the default namespace for the interpreter loop. The :meth:`interact` method of the instance is then run with *banner* passed as the banner to use, if provided. The console object is discarded after use. .. function:: compile_command(source[, filename[, ``''``; and *symbol* is the optional grammar start symbol, which should be either ``'single'`` (the default) or ``'eval'``. Returns a code object (the same as ``compile(source, filename, symbol)``) if the command is complete and valid; ``None`` if the command is incomplete; raises :exc:`SyntaxError` if the command is complete and contains a syntax error, or raises :exc:`OverflowError` or :exc:`ValueError` if the command contains an invalid literal. .. _interpreter-objects: Interactive Interpreter Objects ------------------------------- .. method:: InteractiveInterpreter.runsource(source[, filename[, symbol]]) Compile and run some source in the interpreter. 
   Arguments are the same as for :func:`compile_command`; the default for
   *filename* is ``'<input>'``, and for *symbol* is ``'single'``. One of
   several things can happen:

   * The input is incorrect; :func:`compile_command` raised an exception
     (:exc:`SyntaxError` or :exc:`OverflowError`). A syntax traceback will be
     printed by calling the :meth:`showsyntaxerror` method. :meth:`runsource`
     returns ``False``.

   * The input is incomplete, and more input is required;
     :func:`compile_command` returned ``None``. :meth:`runsource` returns
     ``True``.

   * The input is complete; :func:`compile_command` returned a code object.
     The code is executed by calling :meth:`runcode` (which also handles
     run-time exceptions, except for :exc:`SystemExit`). :meth:`runsource`
     returns ``False``.

   The return value can be used to decide whether to use ``sys.ps1`` or
   ``sys.ps2`` to prompt the next line.

.. method:: InteractiveInterpreter.runcode(code)

   Execute a code object. When an exception occurs, :meth:`showtraceback` is
   called to display a traceback. All exceptions are caught except
   :exc:`SystemExit`, which is allowed to propagate.

   A note about :exc:`KeyboardInterrupt`: this exception may occur elsewhere
   in this code, and may not always be caught. The caller should be prepared
   to deal with it.

.. method:: InteractiveInterpreter.showsyntaxerror([filename])

   Display the syntax error that just occurred. This does not display a stack
   trace because there isn't one for syntax errors. If *filename* is given, it
   is stuffed into the exception instead of the default filename provided by
   Python's parser, because it always uses ``'<string>'`` when reading from a
   string.
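A short illustration of the contract described above; ``compile_command`` and
:meth:`runsource` behave the same way in current Python 3, so this sketch is
runnable there:

```python
import code

# compile_command tells complete, incomplete, and invalid input apart.
complete = code.compile_command("x = 1 + 2")   # complete statement -> code object
incomplete = code.compile_command("if True:")  # needs more lines -> None

# runsource drives the same logic and executes complete input in interp.locals.
interp = code.InteractiveInterpreter()
needs_more = interp.runsource("def f():")        # True: prompt with sys.ps2
finished = interp.runsource("answer = 40 + 2")   # False: compiled and executed

print(complete is not None, incomplete is None)  # True True
print(needs_more, finished)                      # True False
print(interp.locals["answer"])                   # 42
```

The boolean returned by :meth:`runsource` is exactly what a read-eval-print
loop needs to pick the next prompt.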
http://docs.python.org/release/2.6.7/_sources/library/code.txt
numpy.require(a, dtype=None, requirements=None)

Return an ndarray of the provided type that satisfies requirements. This
function is useful to be sure that an array with the correct flags is returned
for passing to compiled code (perhaps through ctypes).

Notes
-----
The returned array will be guaranteed to have the listed requirements by
making a copy if needed.

Examples
--------
>>> x = np.arange(6).reshape(2,3)
>>> x.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

>>> y = np.require(x, dtype=np.float32, requirements=['A', 'O', 'W', 'F'])
>>> y.flags
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
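As a complement to the example above, note the no-copy path: if the array
already satisfies the requirements and no dtype change is requested,
np.require returns the very same array object. A small sketch (behavior is the
same in current NumPy):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)          # C-contiguous, aligned, writeable

# Requirements already met: no copy, the same array object comes back.
same = np.require(x, requirements=['C', 'A', 'W'])
print(same is x)                         # True

# Asking for Fortran order forces a copy that carries the requested flags.
f = np.require(x, requirements=['F'])
print(f is x, f.flags.f_contiguous)      # False True
```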
http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.require.html
VM provisioning

BMC Server Automation enables IT Operators to perform basic ad hoc actions on
virtual assets in their environment (such as starting and stopping virtual
machines), as well as more complex management tasks such as deploying virtual
assets and controlling virtualization sprawl. BMC Server Automation provides
IT Operators with essential tools for the management of the virtual
environment and on-demand deployment of virtual assets.

The following topics describe the management tasks you can perform for virtual
environments:

- Overview of virtualization support
- Browsing virtual inventory
- Deploying assets to the virtual infrastructure
- Monitoring compliance in the virtual environment
- Discovering and registering assets in a virtual environment
- Managing virtualization sprawl

The following table describes the use cases associated with using BMC Server
Automation in a virtual environment. With BMC Server Automation, you have the
ability to:
https://docs.bmc.com/docs/ServerAutomation/82/using/provisioning/vm-provisioning
Ext.Configurator

Manages the config properties for a class.

Properties

cachedConfigs : Object
This object holds a bool value for each cachedConfig property keyed by name.
This map is maintained as each property is added via the add method.
Defaults to: ExtObject.chain(superCfg.cachedConfigs)

cls : Ext.Class
The class to which this instance is associated.
Defaults to: cls

configs : Object
This object holds an Ext.Config value for each config property keyed by name.
This object has as its prototype object the configs of its super class. This
map is maintained as each property is added via the add method.
Defaults to: ExtObject.chain(superCfg.configs)

initList : Array
This array holds the properties that need to be set on new instances. This
array is populated when the first instance is passed to configure (basically
when the first instance is created). The entries in initMap are iterated to
find those configs needing per-instance processing.
Defaults to: null

initMap : Object
This object holds a Number for each config property keyed by name. This object
has as its prototype object the initMap of its super class. The value of each
property has the following meaning:

- 0 - initial value is null and requires no processing.
- 1 - initial value must be set on each instance.
- 2 - initial value can be cached on the prototype by the first instance.

Any null values will either never be added to this map or (if added by a base
class and set to null by a derived class) will cause the entry to be 0. This
map is maintained as each property is added via the add method.
Defaults to: ExtObject.chain(superCfg.initMap)

superCfg : Ext.Configurator
The super class Configurator instance or null if there is no super class.
Defaults to: superCfg

values : Object
This object holds the default value for each config property keyed by name.
This object has as its prototype object the values of its super class. This
map is maintained as each property is added via the add method.
Defaults to: ExtObject.chain(superCfg.values)

Methods

add( config, [mixinClass] )
This method adds new config properties.
This is called for classes when they are declared, then for any mixins that
class may define and finally for any overrides defined that target the class.

Parameters:
config : Object
The config object containing the new config properties.
mixinClass : Ext.Class (optional)
The mixin class if the configs are from a mixin.

configure( instance, instanceConfig )
This method configures the given instance using the specified instanceConfig.
The given instance should have been created by this object's cls.

Parameters:
instance : Object
The instance to configure.
instanceConfig : Object
The configuration properties to apply to instance.

This method is called to update the internal state of a given config when that
config is needed in a config transform (such as responsive or stateful mixins).
Available since: 6.7.0

Parameters:
instance : Ext.Base
The instance to configure.
instanceConfig : Object
The config for the instance.
names : String[]
The name(s) of the config(s) to process.

Merges the values of a config object onto a base config.

Parameters:
instance : Ext.Base
baseConfig : Object
config : Object
clone : Boolean (optional)
Defaults to: false

Returns: the merged config

Parameters:
instance : Object
instanceConfig : Object
options : Object

This method accepts an instance config object containing a platformConfig
property and merges the appropriate rules from that sub-object with the root
object to create the final config object that should be used. This is the
method called by configure when it receives an instanceConfig containing a
platformConfig property.
Available since: 5.1.0

Parameters:
instance : Ext.Base
instanceConfig : Object
The instance config parameter.

Returns: the new instance config object with platformConfig results applied.
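The prototype-chained maps described above (configs, values, initMap, each
created with ExtObject.chain over the super class's map) amount to layered
lookup: a subclass entry shadows the super class entry without copying the
rest. A rough Python analogue using collections.ChainMap (illustrative only,
not Ext JS code):

```python
from collections import ChainMap

# Super class config values; the subclass map chains to it, the way
# ExtObject.chain(superCfg.values) links a class's values to its super class's.
super_values = {"width": 100, "height": 50}
sub_values = ChainMap({"height": 80}, super_values)  # subclass overrides one key

print(sub_values["height"])   # 80  (shadowed by the subclass layer)
print(sub_values["width"])    # 100 (falls through to the super class layer)
```

As in the Configurator, adding a property to the subclass layer never mutates
the super class map.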
https://docs.sencha.com/extjs/7.0.0/modern/Ext.Configurator.html
When you assign a permission to an object, you can choose whether the
permission propagates down the object hierarchy. You set propagation for each
permission. Propagation is not universally applied. Permissions defined for a
child object always override the permissions that are propagated from parent
objects.
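The override rule can be made concrete with a small toy model (a hypothetical
sketch, not a VMware API): a permission defined directly on an object wins;
otherwise the nearest ancestor permission that was set to propagate applies.

```python
# Hypothetical sketch of the resolution rules described above (not VMware code).
class InventoryObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.permissions = {}  # user -> (role, propagate)

    def grant(self, user, role, propagate=True):
        self.permissions[user] = (role, propagate)

    def effective_role(self, user):
        if user in self.permissions:            # child definition overrides
            return self.permissions[user][0]
        node = self.parent
        while node is not None:                 # walk up the hierarchy
            entry = node.permissions.get(user)
            if entry is not None and entry[1]:  # only propagating entries apply
                return entry[0]
            node = node.parent
        return None

datacenter = InventoryObject("datacenter")
folder = InventoryObject("folder", parent=datacenter)
vm = InventoryObject("vm", parent=folder)

datacenter.grant("alice", "Administrator", propagate=True)
vm.grant("alice", "ReadOnly")                   # overrides the inherited role

print(folder.effective_role("alice"))  # Administrator (inherited)
print(vm.effective_role("alice"))      # ReadOnly (child override wins)
```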
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-03B36057-B38C-479C-BD78-341CD83A0584.html
Figure: The inheritance of permissions in the vSphere inventory hierarchy.
Arrows indicate the inheritance of permissions from parent objects to child
objects.
Code Interface Changes

For more information, see the Important CRYENGINE 5.4 Data and Code Changes
article. If you are upgrading from CRYENGINE 5.3, please read this topic:
Migrating from CRYENGINE 5.3 to CRYENGINE 5.4.

Substance Integration

With this integration, we are using Substance Archive files (*.sbsar) to
create Substance Graph instances and generate textures out of them. The
important thing to keep in mind is that the CRYENGINE implementation doesn't
directly output graph outputs from a substance graph, but rather
pre-configured texture outputs that use substance graph outputs as inputs.
Keep in mind the archive still needs to be built inside Substance Designer,
where the properties are then exposed to the Sandbox Editor through the
archive for the modular end result.

Terrain Blending

Objects can now be blended into the terrain, allowing for greater flexibility.
A slight overview of the integration goes as follows: in the current release,
the object meshes get copied at an early stage into the heightmap mesh and
become a seamless part of the terrain mesh, supporting most of the heightmap
properties and 3D terrain materials. Also, as a side note to the terrain
blending, we have enabled auto high-pass in the 5.4 version of the Engine,
thus eliminating an additional and cumbersome step for users, one which only
pertained to terrain materials.

Vulkan API Support

With 5.4, the Engine includes a beta version of the Vulkan renderer to
accompany the DX12 implementation from last year. Vulkan is a cross-platform
3D graphics and compute API that enables developers to build high-performance
real-time 3D graphics applications with balanced CPU/GPU usage. For the
preview release, Vulkan support in CRYENGINE is compatible with projects on
PC; we aim for additional Android support beyond the 5.4 release.

To enable the Vulkan renderer, you need to enter "r_driver=vk" into your
.cryproject file. Keep in mind you must also run the remote shader compiler
locally, as the renderer is dependent on this step.
Entity Components

A longstanding issue within CRYENGINE is the reliance on game code to expose
and manage entities within your scene. CRYENGINE's Component Entity System
provides a modular and intuitive way to construct games. The technology works
at both the system level and the entity level. The interaction model within
the Sandbox Editor allows Developers to create a blank entity container that
houses the advanced game logic wrapped into specific components. Examples of
base components are: Mesh, Light, Constraints, or even Character Controllers.
With these components, you can then develop prefabs to be placed by Level
Designers throughout your game for standardization and exposure to Schematyc
for triggering and event updating.

Extended Detail Bending (Robinson Tech)

Upgraded in 5.4 is the new "Extended Detail Bending" toggle, found in the
vegetation shader. The history behind this feature comes out of our Robinson:
The Journey development, where we needed a more accurate and physically
dependent bending solution for our vegetation. This feature provides Artists
with much more control over detail bending on their branches. For example, the
new bending allows not only small micro movement on leaves, but also whole
branches of trees or chunks of grass, so Artists can have one big chunk of
grass with phased animations so that movement looks natural. Aiding in the
variations, we also apply a world-space mapped noise texture to simulate a
random breeze for variations to the movement.

C# Templates

We have added the following templates in C#:

- C# Plugin
- C# Blank
- C# Rolling Ball
- C# 3rd person

Carrying on with our Launcher improvements, we have also expanded our
selection of templates to get you up and running with CRYENGINE. Several
things have been brought over from our C++ arsenal, including the Plugin and
3rd Person templates in a C# flavor.
5.4 will also introduce the new Sandbox Editor Plugin template, which allows
for development of new Sandbox Editor functionality when using the yet to be
released full Sandbox Editor source code. In future releases, we expect to add
more templates to CRYENGINE which not only show the beginning part, but also
finalized examples that guide you through each phase of game development.

Asset System Updates

In 5.3, we shipped a base implementation of the Asset System (long overdue in
the Sandbox Editor) that allows users to directly interact with their assets
and directories. We know that the most optimal solution is to never touch the
Windows Explorer in order to achieve the change you want - the Asset System
certainly brings us one step closer to this reality. To achieve this, we have
now added direct drag-and-drop support for the Asset Browser along with other
tools integration (Material Editor, Particle Editor). To aid in changing paths
or managing dependencies, we have introduced an all-new Dependency Graph with
the 5.4 version of the Engine.

Now within the Asset Browser, we have exposed dynamic CGF switching
functionality with the added benefit of having a direct view of a thumbnail to
preview and select the asset that best suits your needs in your development.
That allows you to step down the list to find the asset best suited for your
scenario.

UPDATE: Editor Source Code (Not included in preview release)

With the 5.4 full release, the Sandbox Editor source code will be provided to
users so that they can extend the Sandbox Editor for their own custom tools
and applications. In this release, we have taken time to provide the resources
and knowledge necessary to work within the Sandbox Editor code and to develop
Editor Plugins that are fully integrated with the Sandbox Editor and its most
advanced systems.
With the release of the Sandbox Editor source code, we have also planned to
expose a new Sandbox Editor programming section within our Technical
Documentation. This section would include sample plugins that can be used to
access and extend the Sandbox Editor from the base we deliver. We are looking
forward to contributions from the community to strengthen and supplement the
tools that are offered with CRYENGINE!

- CRYENGINE Team

Known Issues

- CRASH: Typing a numeric value into texture name field triggers assert and crash (CDeviceResourceSetDesc::UpdateResource).
- DX12: Taking a screenshot freezes the game.
- DX12: Vegetation and brush streaming is slow/partially missing.
- CRASH: Possible crash when creating a new map and switching/changing terrain texture size several times in a row (CryRenderD3D11.dll!std::_Hash<std::_Umap_traits<void * __ptr64).
- CRASH: Docking Schematyc window to main Viewport crashes (CryRenderD3D11.dll!CCryDeviceContextWrapper::OMSetRenderTargets).
- CT: Editing an *.animevents file in the CT and saving the changes does not work.
- WARNING: (TRACK) Capture Track produces warning: End capture failed.
- SCHEM: Nodes are broken in LightenUp2.
- SCHEM: Components of LightenUp2 are broken.
- CRASH: Sandbox Editor corrupted after crash in Mannequin (CFragmentSequencePlayback::Update()).
- MANNEQUIN: Performance drops when moving the camera around in the Mannequin Editor.
- CRASH: Creating several new levels causes crash (CryRenderD3D11.dll!CCryDeviceContextWrapper::PSSetShaderResources).
- CRASH: Creating several new levels with added animations folder causes crash (CryRenderD3D11.dll!CCryDeviceContextWrapper::PSSetShaderResources).
- TRACK: Standard Events are missing from TrackView.
- RENDER: Selected objects are not rendered correctly.
- RENDER: (PARTICLE) Environment probes do not affect particles.
- CRASH: (MULTIPLAYER) Client may crash at match end (Entity_grid_checker rwi.cpp).
- CRASH: Disable a lights-PFX in Schem-Properties, then switching to main Viewport causes crash (Cry3DEngine.dll!CLightVolumesMgr::Update).
- RENDERING: Ocean is missing/not rendered, skybox is too bright in Woodland.
- CRASH: (LEVEL EXPLORER) Using undo and redo and deleting the main layer can cause crash (EditorCommon.dll!CBaseObject::Serialize(CObjectArchive & ar)).
- SPAM: Camera movement in 3rd person template causes massive spam.
- CT: (PHYSICS) Most of the rope/cloth physics properties don't have any effect on physicalized objects.
- MANNEQUIN: Mannequin Editor does not hot reload files, nor does it try to load new ones on deliberate load commands.
- CRASH: (PE/PFX) Dragging PFX from a file-creation dialogue into the map and then using cancel causes crash (Cry3DEngine.dll!pfx2::CParticleEffect::Compile()).
- ERROR: Spam in NC when jumping into Game using the 3rd person shooter template (Animation-queue).
- SANDBOX: 'Package Build' function is broken.
- CRASH: (MANNEQUIN) Assigning an animation to a key that is not aligned with the animation next to it causes crash.
- CRASH: (CT) Loading a different *.cgf while a character-animation plays occasionally causes crash (EditorCommon.dll!Explorer::EntryBase::Serialize).
- CRASH: (FG) Using Undo/Redo often (after adding a node) causes crash (Sandbox.exe!CHyperGraph::GetAllEdges).
- DESIGNER: PolyLine tool does not finish properly.
- Substance Integration: Undo not working.
- Substance Integration: It is not possible to create materials out of substances - just textures, which must be assigned to materials manually.
- Substance Integration: When a Substance Archive(s) is/are modified outside of a Sandbox Editor session, then instances that depend on the archive(s) are not recalculated.
- Substance Integration: Disabling output in an instance does not lead to resulting texture removal.
- Substance Integration: Even if there are no changes in a Substance Instance (after closing the Instance Editor), high quality outputs are recalculated.
- CRASH: (FG) Using shortcut "Q" to quickly add a node causes crash (CHyperGraphView::OnKeyDown). Adding manually avoids crash.

Animation

Animation General

- Refactored: Added a function to extend a skeleton with n number of skins (in a batch).
- Fixed: Character sync to saved state.
- Fixed: Issue with not syncing invisible ragdolls correctly.
- Tweaked: Added more profiler markers in AttachmentManager.

Character Tool

- New: Create serializable animation layer stacks in the Animation Tool.
- Fixed: LoadAndLockResources wasn't preloading VCloth attachment skins.
- Tweaked: Character Tool can now play animation fx events from multiple AnimFX libraries.

Mannequin

- Fixed: Occasional crash while rotating camera and zooming at the same time (with mouse wheel).
- Fixed: Missing floor-grid rendering in the Mannequin viewport.

AI

AI System

- New: (PerceptionSystem) AI Cleanup - Move PerceptionManager to a plugin.
- New: Added new navigation UpdatesManager interface and implementation.
- New: (Navigation) NavMesh Improvement - Grouping 'per agent type' in menu.
- Refactored: Changes in AI System interfaces.
- Refactored: Navmesh updates refactoring.
- Refactored: Setting navmesh raycast default version to the newest one.
- Optimized: AI Cleanup - Removing old navigation system.
- Optimized: AI Cleanup - Removing SelectionTree from the AISystem.
- Fixed: Navmesh raycast not returning correct results in some cases (fixed in ai_MNMRaycastImplementation 2).
- Fixed: Dead puppet's BTs were still ticking and potentially running TPS queries.
- Fixed: Navmesh isn't updated in Physics/AI mode in the Sandbox Editor when a physical entity is destructed by the physics on-demand system.
- Fixed: Editing terrain could result in unconnected navmesh tiles.
- Fixed: Updating navmesh when editing exclusion areas in the Sandbox Editor (for example, Delete & Undo).
- Fixed: Asserts in navigation system when exporting a level with exclusion areas for the second time.
- Fixed: Rare multithread-related crash in WorldVoxelizer when updating navmesh.
- Fixed: (Navigation) Navmesh Improvement - No re-validation after leaving game mode.
- Fixed: NavigationSystem - When creating a new level, exclusion volumes associated to agents carried over from the previous level instead of being cleared.
- Tweaked: (Navigation) Navmesh Improvement - No re-validation when level is loaded.

UQS

- New: Implement save menu action in the UQS Query Editor.
- New: Implement support for elements copy in the UQS Query Editor.
- Fixed: (HistoryInspector) Debug-rendering in edit-mode would sometimes no longer work - depending on how foreign editor-plugins changed the current Viewport.
- Fixed: CItemListProxy_Writable - Pointers to the underlying itemlist weren't retrieved in the case where the itemlist had already been filled in a previous round trip, thus causing a crash when writing to the items.
- Fixed: Memory leaks and crashes due to wrong use of unique_ptrs.
- Fixed: Changed all places that were still relying on element names instead of element GUIDs. Added the concept of a default query type via UQS::Core::IUtils::GetDefaultQueryFactory for when creating a new query in the editor.
- Fixed: Client-side ItemMonitors would get destroyed by the wrong deleter on the core side, thus causing crashes in mixed debug/release builds.
- Tweaked: Enable property tree undo in the UQS Query Editor.

Audio

Audio General

- New: Added PortAudio ACE plugin.
- New: Audio system using Oculus Audio SDK 1.1.2 with Wwise.
- New: Added Oculus HRTF support.
- New: Substituted CVar "s_PositionUpdateThreshold" with "s_PositionUpdateThresholdMultiplier". An audio object's distance to the listener is multiplied by this value to now determine the position update threshold.
- New: Added ability to set an audio parameter on a Mannequin audio clip.
- New: Added SDL_mixer and dependent libraries to the compilation of the Engine; this is to enable us to conveniently build SDL_mixer for any supported platform ourselves.
- New: Pooled audio objects and events in the middleware.
- New: Pooled audio objects.
- New: Missing dropdown options from the audio trigger spot.
- Refactored: Moved all of the audio data into its own namespace, namely "CryAudio" - adjusted include paths in audio interfaces to have proper formatting. Renamed the IAudioSystem interface to ISystem after it was embedded into the CryAudio namespace.
- Refactored: Removed the external PushRequest method from the AudioSystem interface and with that all external PushRequest types. Introduced task-specific methods to the IAudioObject interface to substitute object-specific external PushRequests, and corresponding methods to the IAudioSystem interface for system-global tasks. This frees us from copying external request data into internal structures. Introduced ExecuteTriggerEx to the IAudioSystem interface as a sort of convenience fire-and-forget-type method for instant audio events that need an audio object where the user doesn't require explicit handling of such an object. This additionally combines tasks which were single requests before. Changed how listeners register to PushRequests; this is now handled more conveniently, and currently available events are exposed through the EAudioSystemEvents enum. Removed the AudioProxy class; it is now substituted directly by the already existing AudioObjects. Data duplication and needless cycle burning are drastically minimized. Renamed the CryAudio::Impl::IAudioImpl::NewAudioListener method to CryAudio::Impl::IAudioImpl::ConstructAudioListener. Renamed the CryAudio::Impl::IAudioImpl::DeleteAudioListener method to CryAudio::Impl::IAudioImpl::DestructAudioListener.
Removed the "NewDefaultAudioListener" method from the CryAudio::Impl::IAudioImpl struct as it became obsolete. Listeners are simply constructed through the already existing ConstructAudioListener method. Renamed the IAudioSystem::GetAudioRtpcId method to IAudioSystem::GetAudioParameterId. Documented and "doxygened" the IAudioSystem interface. Updated documentation of the IAudioImpl interface. Adjusted client code according to these changes.
- Refactored: Removed memory allocators for the audio implementations; now they use the module heap.
- Refactored: Direct calls to the audio entities and objects.
- Refactored: Unified the name property of audio controls.
- Refactored: Removed use of allocator from null implementation.
- Refactored: Using pool objects for standalone files.
- Refactored: Removed id lookup for standalone files.
- Optimized: Removed blocking request when creating audio objects on demand.
- Optimized: Replaced the audio system request queue with an MPSC queue.
- Optimized: Removed unnecessary variable.
- Optimized: Removed audio event id.
- Optimized: Pooled audio events.
- Optimized: Removed the debug-name-store functionality, storing debug data on the objects now.
- Fixed: An assert where debug display tried to access the adaptive Occlusion value before it was determined.
- Fixed: Crash when altering values of AudioSpot used in SCHEM entity - jumping into game and exiting causes a crash.
- Fixed: 'Trying to execute an audio trigger at (0,0,0)...' warning when playing a sound on a proc layer in Mannequin preview.
- Fixed: Crash when audio triggers were played via the global audio object (Sidewinder Template).
- Fixed: Play Mode 'Delay' is not working properly on ATS.
- Fixed: Crash/bug where the audio object was already released when the sync callback was executed in the next External Update.
- Fixed: Occlusion calculation did not disable beyond activity radius.
- Fixed: Compilation on Durango.
- Fixed: Removed audio-specific "using namespace" directives from entire Engine code files to prevent data collisions with objects in the global namespace and prevent bleeding namespace directives in no-uber type builds.
- Fixed: Do not clear an audio object's name when releasing an implementation.
- Fixed: Compile error where a PortAudio impl-specific enum was declared in the wrong namespace.
- Fixed: Missing include.
- Fixed: Revived the "sounds on ropes" feature.
- Fixed: Global audio object implementations are updated again.
- Fixed: Crash when shutting down and getting event-finished callbacks in Wwise.
- Fixed: Updated Wwise SDK to v2016.2.1 build 5995.
- Fixed: Bug in CMake setup when including the Oculus spatializer dll.
- Fixed: Move middleware-specific audio editors to their own folder within EditorCommon.
- Fixed: FMOD dlls now properly added to the build.
- Fixed: Crash when reloading audio.
- Fixed: The ACE is displaying ".cryasset" files.
- Fixed: Crash in CEntityComponentAudio when running with NULL AudioSystem.
- Fixed: Audio objects too close to the listener now automatically disable occlusion calculations; "too close" can be defined via the newly introduced CVar "s_OcclusionMinDistance", current default value is 10 cm.
- Fixed: A local variable of name "id" clashed with a member variable also named "id".
- Fixed: Only add switch/state connection if the middleware successfully created it.
- Fixed: Compile error.
- Fixed: Switches, RTPCs and environments properly set when changed.
- Fixed: FMOD Studio events did not assume obstruction and occlusion values on start.
- Fixed: Missing audio object names.
- Fixed: Quitting game launcher crashes (CryAudioSystem.dll!CryAudio::CSystem::PushRequest).
- Tweaked: Updated Wwise SDK to v2016.2.3 build 6077.
- Tweaked: Updated FMOD Studio API to version 1.09.04.
- Tweaked: In PortAudio impl, swapped hard-coded file path length of 512 with audio-global MaxFilePathLength.
- Tweaked: Renamed EPortAudioEventType to EEventType.
- Tweaked: Renamed TAudioGamepadUniqueID typedef to AudioGamepadUniqueId to conform with naming convention.
- Tweaked: Renamed global variable g_audioLogger to g_logger and implementation-specific global loggers to g_implLogger.
- Tweaked: Renamed CAudioLogger to CLogger and put it into the CryAudio namespace.
- Tweaked: Adjusted remaining unscoped enums to now be scoped.
- Tweaked: Changed all audio-specific unscoped enums to scoped enums.
- Tweaked: Audio objects can now be renamed at runtime.
- Tweaked: Separated IListener and IObject from IAudioSystem into their own files; also introduced the IProfileData interface to get access to audio system internal profile data.
- Tweaked: Prevent putting requests in the callback queue if there's no listener associated with the request.
- Tweaked: Renamed the following default controls - object_doppler_tracking to relative_velocity_tracking, object_velocity_tracking to absolute_velocity_tracking, object_doppler to relative_velocity and object_speed to absolute_velocity.
- Tweaked: Default controls now get hashed at compile time.
- Tweaked: Removed some not-needed code.
- Tweaked: Increased the update frequency for abs and rel velocity parameters of audio objects from 10 to 100 Hz.
- Tweaked: Updated FMOD Studio API to version 1.09.01.
- Tweaked: Minor coding guideline adjustments in the SDL_mixer implementation.
- Tweaked: When the audio thread is sleeping it is woken up if a new request is pushed in from another thread.
- Tweaked: Remove unnecessary variable from Wwise audio object.

ACE (Audio Controls Editor)

- Fixed: When drag & dropping Wwise switch/state groups, the corresponding audio system switch states keep their name within their parent switch - used to create unique state names during that operation, which was wrong.
- Fixed: Generation of unique folder names when drag & dropping from the middleware pane.
- Fixed: Crash when adding the very first state to a switch; also enabled adding more than one state.
- Fixed: Crash when removing preload connections.
- Fixed: Numerous usability issues with the ACE.
- Fixed: Renaming of environments after ACE refactor.
- Fixed: First-generation bugs after ACE refactoring - ACE refactoring to use the Asset Manager model.

DRS (Dynamic Response System)

- New: Added an option to swap variables.
- New: Added warning if CopyVariable fails.
- New: Added serialization for cool-down variables.
- New: Added CopyVariable action.
- New: Added an option to execute a DRS signal on the selected entity (and to create the drs-actor if missing).
- New: Added import/export to/from .TSV.
- Refactored: How "local" variable collections are created/managed.
- Refactored: Initialize the DRS automatically on startup - based on the new CVar drs_dataPath.
- Refactored: Removed the no-longer-needed CryDrsCommonElements extension.
- Refactored: Removed exotic SpeakLineBasedOnVariable action from GameSDK.
- Fixed: Constantly-updating variables were not editable.
- Fixed: (DRS) Wrong entries in selection dropdowns from Responses.
- Fixed: A shutdown crash that happens when the game registered custom actions/conditions to the DRS.
- Fixed: Typos in error message.
- Fixed: TimeSinceResponse Condition now also checks the last "StartTime" of a response - in case an instance of the response is currently running.
- Fixed: Crash when adding variables via UI.
- Fixed: Crash when an entity playing a sound via a drs-action is deleted.
- Fixed: No-uber builds.
- Fixed: (DRS) Sending signal crashes.
- Fixed: Assert about line not running when cancelled - only occurred if the same line is (re)started and has finished in the same frame.
- Fixed: Saving of DialogLineDatabase was not working with game templates.
- Fixed: Creation of responses folder did not work.
- Fixed: Startup crash when using the NullImpl.
- Tweaked: Wait-Action now has a min and a max Time property - to allow random wait times.
- Tweaked: Small code cleanups based on Visual Assist comments (mainly removed default constructors and added override keyword to destructors).
- Tweaked: Sorted Actions & Conditions alphabetically.
- Tweaked: Improved code quality based on Visual Assist Code Analysis.
- Tweaked: Updated the drs-audio actions.
- Tweaked: Made the autosave feature of the dialog line database optional.

Core/System

Engine General

- New: EntitySystem component refactor.
- New: Unified Entity and Schematyc components.
- New: Added support for changing entity component masks at runtime, allowing for toggling update and more.
- New: Added IEntityArchetypeManagerExtension interface and support for it.
- New: (Yasli) Added new compile-time checks to serialization interface.
- New: Automatically unregister CVars when a plugin is shutting down.
- Refactored: IUILayout interface.
- Refactored: (Flowgraph) Added an interface for communication with the game-specific precache system.
- Refactored: Refactor properties by merging property handler and attributes into IEntityPropertyGroup that can be implemented once per entity component.
- Refactored: Removed the requirement for release builds to use signed paks.
- Refactored: Removed string-based set-or-create game token method.
- Refactored: Added a 3rdParty folder to Code/CryEngine/CryCommon. Allow users to include 3rdParty header libraries via #include <3rdParty/abc/x.h> or <abc/x.h> anywhere in code, e.g. CryCommon header files.
- Refactored: (EAAS) Remove IS_EAAS from build.
- Refactored: SGUID using CryGUID.
- Optimized: Rendering of the AreaManager's area grid. Also added CVar "es_DrawAreaGridCells" to draw cell-specific data such as number and coordinates.
- Fixed: Some cases of Engine assets not being loaded.
- Fixed: Removed an extra call to Detail::CStaticAutoRegistrar<>::InvokeStaticCallbacks that caused default entity components to be registered twice in monolithic builds.
- Fixed: Area solids calculated a wrong bounding box.
- Fixed: Reverted a change to inrange. Fixed potential alignment issues by changing some pointer casts from Matrix44* to Matrix44f*.
- Fixed: Lua scripts not being updated.
- Fixed: Issue where a fatal error would be triggered when dynamically creating area components.
- Fixed: Case where game rules and the local player ID could be reserved too late.
- Fixed: Placing an Audio Area Random caused all entities to disappear after saving and reloading.
- Fixed: Compile-time CRC computation constexpr wasn't compiled as such.
- Fixed: Pasting a file in Windows while GameLauncher.exe is running did not work.
- Fixed: Nullptr render node check in breakability.
- Fixed: Values are not set correctly on any EntityComponent that's not the first component.
- Fixed: Crash caused by simultaneous access to an area grid member from different threads.
- Fixed: Added in-place construction/destruction for non-trivial/POD custom types "T" in the concurrent queues.
- Fixed: Bug where it was not possible to remove a component and directly afterwards add the same component again.
- Fixed: Problem when the audio thread queried the area system during level loading.
- Fixed: Duplicating component-based objects in the Sandbox Editor doesn't work properly.
- Fixed: Entity activation requests lost during the active entity updates.
- Fixed: (CryAction) Allow Designers to target the Rumble to a particular controller.
- Fixed: Detect HTML tag as end of localization key.
- Fixed: A leak when showing additional geometry in TextFields, e.g. borders, cursor etc.
- Fixed: Multiple definition of friend in class on GCC 4.9.3.
- Fixed: (TPS) Bug that caused data not to be loaded correctly.
- Fixed: Problems related to the 'release' configuration.
- Fixed: Places where "engine" was missing for assets located in engine.pak.
- Fixed: Wrong CPUID bit being tested for SSE3.
- Fixed: Case where the "InnerFadeDistance" property wasn't properly exported/imported for AreaBox, AreaSphere and AreaShape.
- Tweaked: Implemented component unit tests.
- Tweaked: CryFixedArray compile error when tracing array subscript out-of-range usage.
- Tweaked: Defined an unreachable-code marker.
- Tweaked: Updated .gitignore to exclude the CMake solution directory.
- Tweaked: Added the es_DebugEntityUsageSortMode command to sort existing es_DebugEntityUsage output.
- Fixed: (Yasli) Couldn't attach errors/warnings while serializing enums due to a missing TypeID on the enum type.

Common

- New: ActionButton can now accept lambdas with captures. Added data-oriented polymorphic type handling (names make no sense at the moment). Added a reset button to reset an attribute to its default value.
- Refactored: Moved automated CVar and command unregistration into its own class.
- Refactored: Removed the need for a Release function in entity components.
- Refactored: Removed implicit declaration of constructors and destructors when using ICryUnknown.
- Refactored: Allow creation of an enum serialization spec for already declared enums.
- Optimized: Move vector for smart_ptr - there should be no exception for the vector to use move semantics.
- Fixed: Template issue in deciding what are valid strings for CryPath to operate on. Fixes issues with FixedStrings and wchar types.
- Fixed: Potential infinite loop on startup.
- Fixed: Static linking (OPTION_STATIC_LINKING) by using the /WHOLEARCHIVE Visual Studio 2015 linker option, to prevent the linker optimizing away static factory registrations.
- Tweaked: Define YASLI_CXX11 always.

System

- New: INumberVector base class for vectors and matrices - consolidates many common functions. Added a relative IsEquivalent option. Added proper Invert and Determinant to Matrix44, and normalizing and projection functions to Vectors.
Updated IsValid and SetInvalid to use C++11 functions.
- New: Automatically create a .cryproject file for legacy workflow projects.
- New: Replaced game.cfg with a new console variables section in the .cryproject file - existing data is automatically migrated.
- New: Moved cryplugin.csv contents into the .cryproject file - data is automatically migrated to the new or existing project file.
- New: Added support for statically linking plugins.
- New: The developer console can be disabled in release builds (compile-time switch).
- New: More information is logged when an assert is triggered in non-release builds.
- New: (GamePause) Added the node Game:PauseGameUpdate to allow Designers to enable/disable game update.
- Refactored: Updated the IArchiveHost interface to support forceVersion on all calls and helper functions.
- Refactored: Introduced the sys_archive_host_xml_version CVar to select CryXmlVersion in the ArchiveHost.
- Refactored: Re-arranged Engine initialization to make the log and console available earlier for the project system.
- Refactored: The project manager loads using Yasli - removed the need for jsmn.
- Refactored: Refactored IEngineModule to never check by string.
- Refactored: Simplified plugin shutdown.
- Refactored: Removed CSystem::SetAffinity as it should be controlled via .thread_config.
- Refactored: Renamed CDebugAllowFileAccess to CScopedAllowFileAccessFromThisThread and added the SCOPED_ALLOW_FILE_ACCESS_FROM_THIS_THREAD macro.
- Refactored: Added better debugging capabilities to CListenerSet.
- Refactored: CFunctorsList uses a parameter pack.
- Refactored: Removed the sys_plugin_reload command until it can be implemented correctly.
- Optimized: Replaced Cry_XOptimise and renderer SSE code with standard Vec4 code. Merged Matrix44H and Matrix44CT definitions.
- Optimized: New versions of vectors and matrices based on SIMD types: Vector4H, PlaneH, Matrix34H, Matrix44H. #CRY_HARDWARE_VECTOR4 substitutes these for non-SIMD versions. Vec4f etc.
are typedefs for _tpl versions when unaligned access is needed. Added unit tests to perform timings and validation on the H classes. Perform QuadraticTest on f64s to better test the algorithm.
- New: SIMD_traits for scalar/vector type relationships. Horizontal min, max and add functions. Template specializations for specific shuffle configurations. convert<T*> to load from memory into SIMD. Changed the 2-argument shuffle<> to mix0011<> and mix0101<> to indicate what they actually do.
- Fixed: Make sure all static geometry is included when saving level statistics.
- Fixed: Set the dev-mode flag earlier to allow modification of cheat CVars from config files in the Launcher.
- Fixed: Unregister CVars before their owners are destructed. Removed IAutoCleanup.
- Fixed: Do not try to load plugins that do not exist on disk.
- Fixed: Updated the path for the default font XML.
- Fixed: Global IsEquivalent was not specialized with the correct epsilon for Vec3.
- Fixed: Removed a debug #pragma message from appearing during compilation.
- Fixed: The AISystem now gets initialized after the FlowGraphSystem (otherwise it won't be able to register FG nodes).
- Fixed: CVars marked as cheat are visible in the console in release builds.
- Fixed: The assert pop-up window doesn't display the reason message of the previous assert when no message is specified.
- Fixed: Assert when attaching the remote console.
- Fixed: The Engine root being added as a pak mod resulted in files in the Engine root constantly being searched for, causing slowdowns and unrelated files being added to file browsers. Additionally, added the %EDITOR% pak alias to avoid future issues with loading Editor assets.
- Fixed: (CrashRpt) Missing crash dumps on Windows 7/8 - improves the reported callstack for CryFatalError.
- Fixed: (CryMovie) Debugbreak in LightAnimationSets.
- Tweaked: Moved legacy game folder functionality to the project system.
- Tweaked: (XboxOne) Updated to Durango XDK March 2017.
- Tweaked: (Orbis) Updated to SDK 4.008.131.
- Tweaked: Added support for using relative paths with the -project command line argument.
- Tweaked: Automated calling of ModuleInitISystem.
- Tweaked: Removed the need to specify a class name when loading plugins through code - instead, the first ICryPlugin implementation in the module is automatically selected.
- Tweaked: (Orbis) Log assert messages via the SCE API; this way asserts from non-main threads become visible immediately. Also, implemented sys_asserts=3 mode (debug-break).
- Tweaked: Use std::numeric_limits::epsilon instead of a self-written approach in checkGreaterZero.

WAF

- Fixed: ScaleformHelper usage by WAF (all configs) and CMake (release config).
- Tweaked: Don't monolithically link MonoBridge into release configs - it's an optional feature.
- Tweaked: Copy the right portaudio binaries (performance and release configs).

CMake

- New: Allow building PakEncrypt with CMake.
- New: Added an option to enable/disable building the Engine itself.
- New: Allow building CryScaleformHelper with CMake.
- New: Allow building the Shader Cache Generator with CMake.
- Refactored: Removed the obsolete MSVC toolchain to avoid confusion.
- Refactored: Removed the MS-customised CMake (it's not new enough).
- Optimized: Don't copy unnecessary debug DLLs or PDBs to bin/win_x64.
- Optimized: Only copy .pdb and .dll files to bin/* when they are required by build options.
- Fixed: CMake configuration fixes.
- Fixed: Make GCC use the LLVM std.
- Fixed: CMake generation of Durango solutions.
- Fixed: Set plugin options before parsing CMakeLists.txt files.
- Fixed: Sandbox Editor compilation with Visual Studio 2017.
- Fixed: GameTemplate solution contains an invalid path as a debug command.
- Fixed: Build without CrySchematyc.
- Fixed: Visual Studio 2017 compilation fixes.
- Fixed: ShaderCacheGen needs png16 as it depends on the renderer.
- Fixed: Missing Schematyc modules in a clean CMake solution.
- Fixed: Optional libraries were not getting included in the final build when using their default values.
- Fixed: Static linking (OPTION_STATIC_LINKING) by using the /WHOLEARCHIVE Visual Studio 2015 linker option, to prevent the linker optimizing away static factory registrations.
- Fixed: Sped up MSVC compilation with IncrediBuild and with uber files enabled.
- Tweaked: Added an option to allow unsigned PAK files for release builds.
- Tweaked: Don't deploy Mono files on every compilation - this will be handled by the build system.

Action

General

- New: Added the Stats:TextureStreaming flow node.
- Refactored: CGameObject aspects serialization is redirected to CNetEntity.
- Refactored: IGameObjectExtension::SetAuthority has been converted to ENTITY_EVENT_SET_AUTHORITY.
  * Example client->server and server->clients RMI messages.
  * PlayerInput sends keystrokes over the network through aspects.
  * Server-authoritative player movement.
  * Multiple clients can connect to the listen server.
- Refactored: Decoupled connection events from game rules.
- Fixed: Proper entity component declarations for CAnimatedCharacterComponent.
- Fixed: Broken vehicles do not go through their damage states properly.
- Fixed: Starting GameLauncher triggers vehicle Lua errors in the console.
- Fixed: Freeze during shutdown.
- Fixed: "CEntityObject::Reload" reported an error because the saved "GameObject entityComponent" could not be loaded from XML.
- Fixed: Shutdown crash.
- Fixed: std::vector returned from interface.
- Fixed: Broken SpawnSerializer for non-default GameObject extensions.
- Fixed: Abrams tank not displayed in first-person view mode.
- Fixed: CFlowplayer::LocalPlayerId - temporarily breaks the Rolling Ball template. In the Sandbox Editor, on Ctrl-G, the ball spawn location is incorrect.

Flowgraph

- New: Added a right-click option on nodes to (un)assign the selected Prefab.
- New: Right-click on a Prefab Instance in the Viewport to get its usage in Flowgraph.
- New: Double-click on a Prefab:Instance node to select the prefab instance in the Viewport.
- New: Exposed script physics params to Flowgraph (in a better way than before).
- New: Extracted Flowgraph from CryAction into a plugin.
- New: Added a Math:Wrap node that wraps a value into a given interval.
- New: Added a "Red" output to the Camera:GetTransform node.
- New: New flownodes: "GameEntity:Containers:Add/Remove Entity", "Merge", "Clear", "Actions", "QueryContainerSize", "QueryRandomEntity", "QueryContainerId", "QueryIsInContainer", "QueryEntityByIndex", "Listener", "Filters".
- New: Entity Containers for Flowgraph - a system to manage and operate on sets of entities.
- New: New node Entity:GetChildrenInfo for getting the IDs and number of children of an entity.
- New: Entity:CheckDistance works with world position for linked entities.
- New: Entity:BeamEntity now supports the Parent coordinate system as well as World. Removed the unused Memo port.
- New: Added a Timer Type to the Timer flow node - allows selection of the Engine timer type to use (i.e. default or UI timer). This is useful e.g. when using this node in a UI_Actions Flowgraph even when game update is paused.
- New: Extended the Time:RealTime FG node to add day, month and year.
- New: Added a node to select flags to combine from a drop-down list.
- New: Added inputs to Physics:RayCast and Physics:RayCastCamera to allow Designers to enter their collision flags.
- New: Added some outputs to Physics:RayCast so that it matches the output of Physics:RayCastCamera.
- New: Added a new node for converting a physics part ID to a joint ID/name.
- New: Added the FG node "AI:BehaviorTree".
- New: Added an input to the interpol nodes - allows changing the transition function which is applied to the result.
- New: Added an Interpol:Easing node that applies an easing function to a given percentage in a range between 0 and 1.
- New: Added new flownodes "Math:AutoNoise1D" and "Math:Auto3DNoise".
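The interval wrapping the Math:Wrap node performs can be sketched as a standalone function. This is only an assumed illustration of the math (the actual node operates on flow node ports); the half-open interval convention and function name are my own choices:

```cpp
#include <cassert>
#include <cmath>

// Wrap a value into the half-open interval [lo, hi).
// Example: wrapping an angle of 370 into [0, 360) yields 10.
float WrapValue(float value, float lo, float hi)
{
    const float range = hi - lo;
    float r = std::fmod(value - lo, range);
    if (r < 0.0f)
        r += range; // std::fmod keeps the sign of the dividend, so shift negatives up
    return lo + r;
}
```

The same idea serves the Interpol:Easing node's 0..1 percentage inputs: values outside the interval are mapped back inside before the transition function is applied.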
- New: New FG node Engine:ShadowCacheParams for changing shadow cascade parameters.
- New: Added the flow node "Engine:OcclusionAreaSwitch" to toggle occlusion areas within an area.
- Refactored: CHyperGraphDialog interface - cleaned up unused functions and component interactions.
- Refactored: Fixed inconsistent smart pointer usage and a dangling pointer crash.
- Refactored: Renamed one of the CFlowGraph classes to CHyperFlowGraph to avoid confusion.
- Refactored: Added Get ports to the FG nodes 'Entity:Velocity' and 'Physics:Dynamics'.
- Optimized: Block unnecessary UI updates during level unload for the full window and not just the tree list.
- Optimized: Minimized the performance impact when rebuilding prefab instance FGs by about 1/5th of the time.
- Optimized: Do not rebuild all Prefab Instance graphs when using the quicksearch node.
- Optimized: Changed the collision FG node's RemoveAll to only remove listeners registered through this node.
- Optimized: Removed the Game:Stop node, as it does not serve any real purpose.
- Fixed: Crash on level load.
- Fixed: Crash on game shutdown in CFlowNode_RaycastCamera.
- Fixed: Changing the param of a material on an entity prevented changing the entity's material.
- Fixed: PlayMannequinFragment should not queue a fragment if the ID was incorrect.
- Fixed: Bug in the EntityFaceAt node that caused a totally wrong world matrix to be applied to objects.
- Fixed: Assert when asking for the local rotation of a parented entity through FG.
- Fixed: The vis area system now respects disabled occlusion areas.
- Fixed: FG debug not working.
- Fixed: Using CTRL+S in the FG window does not save the level.
- Fixed: Entity limit when selecting 'Add Selected Entity'.
- Fixed: 'Add Comment Box/Black Box' not showing when right-clicking on the selection.
- Fixed: The category view mask is no longer saved in the Windows registry, but via the personalization manager.
- Fixed: Graph naming and the title display.
- Fixed: Cases where deleting graphs would leave traces in the UI.
- Fixed: Exiting vr_demo triggers an assert.
- Fixed: Newly created graphs not flowing without undo/redo.
- Fixed: Can't add a Flowgraph Entity as a node.
- Fixed: IFlowNode::SActivationInfo stores a pointer into a std::vector, which is invalidated when the vector is resized.
- Fixed: Crash on shutdown.
- Fixed: Prefab Instance nodes sometimes firing events with the previously assigned instance.
- Fixed: Right-click "Find in Flowgraph" would not gain focus.
- Fixed: Entity Containers and AutoNoise nodes after the Flowgraph extraction.
- Fixed: Sandbox Editor might crash on level load when the DataBase View was modified before.
- Fixed: BlackBox comment type discards Flowgraph nodes in the Game Launcher.
- Fixed: Using the EntityChangeMaterial node caused the affected Entity to not revert back to its default material when jumping out of game mode.
- Fixed: Unhide GeomEntity does not work.
- Fixed: Added missing descriptions to some nodes and made Get methods const.
- Fixed: Tweaks and fixes for the properties window.
- Fixed: Tweaks and fixes for the nodes/components window.
- Fixed: Entity:Attachment node - allow bone names and character slots to be set after game start.
- Fixed: Flow nodes which can't handle linked entities.
- Fixed: 'Physics:RayCastCamera' - removed the offset in the raycast start and always exclude the player from the hit results.
- Fixed: Possible crash if tokens contain different types.
- Tweaked: Set a debug name for the graphs for more useful warnings, debugging and profiling.
- Tweaked: Improved the description of the GameToken FlowNode port.
- Tweaked: Added node IDs to GameToken error messages.
- Tweaked: Fixed the shadow refresh node (the previous code path did not perform the necessary refresh).
- Tweaked: Added a node "Environment:RefreshStaticShadows" to force a static shadow refresh on changing settings/fast-travel/menu transitions etc.
- Tweaked: Added a temporary workaround string for Formats.
- Tweaked: Added a 'Get' port to the 'Entity:ParentId' node.
- Tweaked: Added a comment to Entity:GetPos about the scale coordinate system.
- Tweaked: In fg_debugmodules, render the instance number first, then the name, so it doesn't get trampled by long module names.
- Tweaked: Added the r_shadowscache CVar to the exposed parameters in the Shadow FG node.

Game

- Optimized: Whizby and ricochet audio objects don't need occlusion to be enabled.
- Fixed: Unnatural gap between the tornado's main part and top part.
- Fixed: Locked head rotation in multiplayer.
- Fixed: Abrams turret playing sound after overheating.
- Fixed: Audio trigger 'Stop_w_tank_machinegun_fire' spammed after firing the Abrams machine gun.
- Fixed: Assertion failure while idling in the Sandbox Editor and holding the grenade (SaltBufferArray.h: rSalt > oldSalt).

Schematyc

- New: Added Impulse and AngularImpulse to the physics component.
- New: Display an error when the sender GUID is not passed with a component/action signal.
- New: Added a CrySystemsComponent to allow communication with other systems (currently game tokens, but TrackView or FG could be other options in the future).
- New: Simplified component registration.
- New: Update action instance.
- New: Implemented compile-time reflection tests on all compilers.
- New: Added author and description to env packages.
- New: Created an experimental action node.
- New: Implemented a signal 'sender' filtering system.
- New: Added Clamp nodes for Vec2, Vec3 and floats.
- New: Added a basic DRS component.
- New: Added flags for the move function to only set specific components of the movement vector.
- Refactored: Unified runtime params with runtime graph node instance params: where possible, replaced CRuntimeParams with CRuntimeParamMap.
- Refactored: Refactored the geom component to use the new registration method.
- Refactored: Refactored components to use the new registration method.
- Refactored: Deprecated custom component properties.
- Refactored: Simplified the IEnvComponent interface.
- Refactored: Cleaned up the reflection system.
- Refactored: Removed author and wikilink from env elements.
- Refactored: Simplified the IEnvDataType interface.
- Refactored: Simplified the IEnvSignal interface.
- Refactored: Cleaned up the IScriptElement interface.
- Refactored: Unified property setup to use the new entity component properties.
- Fixed: Range checks were missing when setting the FOV from a property.
- Fixed: Input::Action signals are not called and expose different interfaces.
- Fixed: QuatToString - more parameters than specified by the format string.
- Fixed: Reverted a change to PropertiesWidget to handle object deletion, since it crashed.
- Fixed: SetVisible/IsVisible of the GeomComponent was not working.
- Fixed: Non-working signal receiver - component-bound signals were sent by the object and not through the component.
- Fixed: The LineStarted/Ended signal was not sent correctly.
- Fixed: Compilation of the input component.
- Fixed: Copy/paste offset.
- Fixed: Connections were deleted when clicking on a parameter name.
- Tweaked: Some of the input key names.

Templates

- New: Added a Third Person Shooter template to the C# game templates.
- New: The C++ RollingBall template is now network-enabled.
- Refactored: CGameObject deprecation - converted the RollingBall template to use IEntityComponents + CEntity instead of CGameObject for the player (ball).
- Fixed: RollingBall template is displayed as a Blank template in the Sandbox Editor.
- Fixed: Update the ball look direction for remote players.
- Fixed: Restored the network sync feature in the RollingBall template.

ProfVis

- New: Added panning left and right once zoomed, by holding the middle mouse button.
- New: Clicking on a function (in the treeview or datagrid view) will show everything running at the same time on all threads.
- New: Added filtering of all functions above a certain threshold.
- New: Added filtering by name and args separately.
- New: Added the number of calls if the function is called multiple times in the scope.
- New: Call graph on the right (highlighting the problematic parts).
- Fixed: Server crash of ProfVis if sys_bp_frames_threshold != 0.
- Fixed: Improved BootProfiler multithread safety.
- Tweaked: Added sys_bp_frames_worker_thread, which allows running ProfVis in "gather frames under a certain threshold" mode without affecting the main thread (it uses a worker thread to dump the sessions).

Auto Testing

- Refactored: Refined test log error output, falling back to value output as a byte dump, with unsigned output providing both decimal and hexadecimal.
- Refactored: Moved interprocess communication (StatsAgent) to CrySystem to support non-GameSDK games and improve the Windows launcher.
- Refactored: Integrated the unit test target, cleaned up the unit test system, auto-open the report for failed tests when using the Excel reporter, and simplified Excel report rules.
- Fixed: Communication pipe - improved code spelling, formatting, quality and header inclusion dependencies.
- Tweaked: Encourage value comparisons in tests instead of testing true/false.
- Tweaked: Fixed a timedemo shutdown crash by preventing cross-module allocation.
- Tweaked: Code modernization, misspelling fixes, leak fixes.
- Tweaked: Added a command line option for opening the report locally when a test fails.
- Tweaked: Added a guard condition to fix an error for non-engine builds caused by the unit test target.

C#

- New: Introduced support for Xamarin Studio.
- New: Added an install file for the Xamarin Studio CryEngine Add-in, which allows debugging C# projects in Xamarin Studio.
- New: Added support for C# plugins not implementing ICryEnginePlugin, for example to provide a pure component collection.
- New: Introduced basic support for compiling *.cs files from the asset directory at startup.
- New: Introduced a new Third Person Shooter template.
- New: Managed Audio wrapper.
- New: Managed wrapper for Animation and Character classes.
- New: CVars and console commands can now be created from C#.
- Refactored: Wrapped the Mono API in a new MonoInternals namespace.
- Refactored: Replaced the singleton interface based API with direct access to the implementation.
- Refactored: Removed IListener from SWIG and implemented it natively in the managed Audio wrapper.
- Refactored: Wrapped math classes to avoid excessive virtual calls and marshaling.
- Refactored: EntityClassAttribute has been deprecated and its functionality moved to the EntityComponentAttribute.
- Refactored: The EntityComponent now has a default icon in the Sandbox.
- Fixed: Conversion of managed strings to native causing memory leaks.
- Fixed: Plugin DLLs being locked after loading, preventing rebuild for reload.
- Fixed: Crash after reloading assemblies multiple times.
- Fixed: Improper shutdown of domains and failure to unpin objects after serialization.
- Fixed: Serialization of read-only fields; added a unit test.
- Fixed: Added the AllowDuplicate functionality to the C# components.
- Fixed: EntityComponent.OnUpdate not being called every frame.
- Fixed: Textures never being removed, causing a memory leak.
- Tweaked: Removed flow node systems from the C# CryEngine.Core assembly.
- Tweaked: Exposed EntityComponent.OnGameplayStart, OnRemove and OnEditorUpdate.
- Tweaked: Updated SWIG to version 3.0.12.
- Tweaked: Cache entity callbacks to avoid querying at runtime.

C# Backend

- Fixed: Prevent compiling every C# file on disk when the asset directory cannot be found.
- Tweaked: Improved devirtualization inside own module.
- Tweaked: Exposed the IMonoMethod interface in order to clarify the API - now allows querying methods independently of invoking them, as a future optimization.

Graphics and Rendering

Renderer General

- New: Added Vulkan support.
- New: Ported snow rendering passes to the new graphics pipeline.
- New: Ported rain rendering passes to the new graphics pipeline.
- New: Added CRainStage and CSnowStage for the new graphics pipeline.
- New: Ported CHUDSilhouette to the new graphics pipeline.
- New: Ported CHUD3D to the new graphics pipeline.
- New: Added Color and RGBA conversion for the silhouette param to HUDUtils.
- New: Implemented ApplyForceToEnvironment for grid wind.
- New: Implemented support for shader extension files from the GameShaders folder.
- New: (Brush.cpp) Apply the IRenderNode silhouette param during the render step in CBrush::Render(const struct SRendParams& _EntDrawParams, const SRenderingPassInfo& passInfo) to make use of the new SExposedPublicRenderParams from IRenderNode.h.
- New: (CharacterRenderNode.cpp) Apply the IRenderNode silhouette param during the render step in void CCharacterRenderNode::Render(const SRendParams& inputRendParams, const SRenderingPassInfo& passInfo) to make use of the new SExposedPublicRenderParams from IRenderNode.h.
- New: (IRenderNode.h) Added tracking of the silhouette param value.
- New: Ported CFilterSharpening, CFilterBlurring, CUberGamePostProcess, CFlashBang, CFilterKillCamera, CScreenBlood, and CPostStereo to the new graphics pipeline.
- New: Ported water flow, water droplets and underwater god rays to the new graphics pipeline.
- New: Added a post effect stage for the new graphics pipeline.
- New: Ported TexBlurAnisotropicVertical to the new graphics pipeline.
- New: Added a terrain debug pass.
- New: Added a terrain z-prepass.
- New: Restored vegetation billboards (disabled by default via CVar).
- New: Added CV_CameraRightVector to CBPerViewGlobal.
- New: Exposed the new IRenderer::CreateTextureArray function.
- New: Added defrag allocator support for validating that unpinned blocks don't change content over time.
- Refactored: Removed the deprecated Cloud entity.
- Refactored: Added a DecalAngleBasedFading parameter, e.g. for snow effects on objects.
- Refactored: Removed the obsolete property Waves Speed for Ocean Animation.
- Refactored: Removed the deprecated VolumeObject entity.
- Refactored: Small changes for compatibility with VecH types. Added the UFloat4.Load function - works with scalar or SIMD types, simplifying lots of code.
- Refactored: TerrainLayer shader improvements: enabled an in-shader real-time diffuse texture high-pass filter with radius adjustable in the material properties. This removes the requirement to high-pass textures in the RC, and from now on the same texture file may be used for terrain and other objects. Also added a direct diffuse texture mode that bypasses any terrain-specific diffuse texture processing, which simplifies matching the look of terrain and other objects.
- Refactored: Removed rendering functionality in the old graphics pipeline such as rain, snow and water normal gen.
- Fixed: NaN when %DEPTH_FIXUP is enabled.
- Fixed: Stack stomp caused by cubemap uploads.
- Fixed: SVOGI specular tracing flicker.
- Fixed: Disabled sharing of a compiled object between shadows and non-shadows, as this is not thread safe.
- Fixed: Race condition between gbuffer rendering and shadow map prepare - the same compiled object could be used for gbuffer rendering while being compiled for shadow maps.
- Fixed: SVOGI not waiting for shadow rendering to complete.
- Fixed: Volume texture updates.
- Fixed: Broken "e_debugdraw" modes.
- Fixed: Crash when removing entities on a dedicated server.
- Fixed: CRenderMesh and CRenderElement leaks when chain-loading levels.
- Fixed: Broken glass shader with the NEAREST flag.
- Fixed: Volume-based tiled shading for non-opaque objects.
- Fixed: Assertion triggered when rotating a water volume around the X and Y axes.
- Fixed: Assertion triggered when a water volume material asset is set to the Ocean Material property in Level Settings.
- Fixed: Adjusted GI temporal reprojection.
- Fixed: Made each utility pass a unique instance within CStandardGraphicsPipeline::Execute.
- Fixed: Crash in CStandardGraphicsPipeline::ExecuteAnisotropicVerticalBlur.
- Fixed: Made scissor-rectangle culling always enabled to match DX12 behavior.
- Fixed: CRenderPrimitive wrongly reuses CDeviceGraphicsPSO when the vertex format changes.
- Fixed: Discrepancies between VR implementations.
- Fixed: Added tessellation to debug passes.
- Fixed: Directional map interpretation for hair.
- Fixed: SelectionID pass depth buffer.
- Fixed: Cubemap streaming upload assert.
- Fixed: Clear render targets of the HDR pipeline before use.
- Fixed: Numeric coding of texture types; can't be changed in any way.
- Fixed: Broken LOD dissolve.
- Fixed: Broken HUD silhouette rendering.
- Fixed: Shader asserts/crashes in NV Multires mode.
- Fixed: Crash during shader generation.
- Tweaked: Changed the FileAccessScope warning to an error.
- Tweaked: Made SkipAlphaTest a cheat and suppressed its interpretation in release builds.

Volumetric Fog

- Optimized: Reduced draw calls of clip volume rendering for volumetric fog. This can't be used on PS4 because a depth stencil array texture is currently prohibited in the PS4 renderer.
- Fixed: A potential issue where the wrong sampler state could be used.
- Fixed: Shader compilation error on PS4.
- Fixed: Added an implementation for not using a depth stencil array texture - currently needs CryRenderD3D11 instead of CryRenderGNM.
- Fixed: Crash at CCryDX12DeviceContext::BindResources in a recursive render pass when activating the DX12 renderer.
- Fixed: Crash at CWaterStage::PreparePerPassResources when moving to a level which doesn't have global environment probes.
- Fixed: The clip volume texture for volumetric fog isn't initialized properly when shaders are compiled asynchronously.
- Tweaked: Time of Day default values to make volumetric fog look physically correct.

Volumetric Clouds

- New: Added noise texture properties for Volumetric Clouds.
- Fixed: CloudBlocker entity doesn't work.

3D Engine

- New: Support for integration of object meshes into the terrain mesh. Allows overcoming multiple limitations of the current heightmap editor.
- New: Checkboxes for the GameToken node to trigger on start and to only trigger when the value changes.
- New: Warnings for Designers about forced conversions and missing tokens. Removed a spurious warning for not-found graph tokens.
- New: Added RenderNodeStatusListener to the Engine interface.
- Refactored: Removed traversal over render node slots (because now we only have one).
- Refactored: Removed obsolete parameters in a few functions of the IRenderNode interface.
- Refactored: Movable brush render node separated into a stand-alone eERType_MovableBrush type.
- Refactored: Reduced the default game template files required by the Engine.
- Refactored: Tokens have their default value on game start and on Sandbox Editor jump-to-game, without triggering.
- Refactored: Added a CVar to control integration of object meshes into the terrain mesh.
- Optimized: Removed IEntity* m_pOwnerEntity from IRenderNode (saved 30MB).
- Optimized: Level load/unload speedup by adjusting cached shadows processing.
- Optimized: Cache comparison values in the nodes with the correct type.
- Optimized: (SVOGI) Optimized global env probe search code.
- Optimized: (SVOGI) Underground voxelization is allowed for selected brushes ("Support Secondary Visarea" flag).
- Optimized: (SVOGI) Added thread affinity CVars for GI voxelization (for consoles).
- Optimized: (SVOGI) Added a budget CVar (e_svoMaxAreaMeshSizeKB) limiting the amount of geometry voxelized per area (by skipping less important objects).
- Fixed: (Vegetation) Crash when enabling painting certain vegetation assets with "auto merged" enabled. High-poly objects are automatically skipped from merging if the number of polygons is higher than e_MergedMeshesMaxTriangles.
- Fixed: Crash on a dedicated server during level unload.
- Fixed: (VR) Oculus crashes on startup.
- Fixed: Support coll_type in surface types for individual phys proxies of a node.
- Fixed: Crash in IMaterial release by a compiled render object.
- Fixed: Buffer overrun on level load (reproduced in Win32 build).
- Fixed: Restored capsule shadows support.
- Fixed: Restored LOD dissolve on entities.
- Fixed: Dangling entity pointer workaround.
- Fixed: StatObj leak caused by a refcount deadlock.
- Fixed: Workaround for terrain texture exported in a platform-incompatible format: decompression on CPU.
- Fixed: Assert on removing graph tokens.
- Fixed: Problem with graph token registration - made obvious when tokens have the same name in different FG Modules.
- Fixed: Static object GetExtent not working when the render mesh was not yet loaded.
- Fixed: Tokens have the data type that is defined instead of always string, plus fixed wrong type conversions.
- Fixed: Character capsule shadows.
- Fixed: Durango Debug/Release compilation.
- Fixed: An issue with foliage cleanup.
- Fixed: (SVOGI) Added protection against rare GPU hang.
- Fixed: (SVOGI) Incorrect unload of old unused voxel segments.
- Fixed: (SVOGI) Cases where GI voxelization skipped some areas (on consoles).
- Fixed: (SVOGI) Interiors rendered too bright in daylight.
- Fixed: CBrush::OnRenderNodeBecomeVisible is not thread safe.
- Fixed: Vegetation billboards - fixed unwanted processing when the feature is not used (caused crashes in some cases).
- Fixed: Geometric mean calculation of sub-objects.
- Fixed: Broken SVOGI voxelization. Fixed analytical proxies not working together with voxels.
- Fixed: SVOGI memory corruption.
- Fixed: MP - Letting your avatar be killed several times triggers an assert.
- Fixed: Activating the auto merge feature on dead tree assets crashed the Sandbox Editor.
- Fixed: Placing several GeomCache entities will make them look blurred.
- Fixed: Missing UI reaction for the 'objects into terrain mesh integration' feature.
- Fixed: CVar "e_ViewDistRatioLights" not being updated.
- Fixed: (VR) Oculus headset works, but the GameLauncher.exe window only shows a black screen.
- Tweaked: Profile marker tweaks.
- Tweaked: Added graph name to game token warnings.

Particles
- New: MotionCryPhysics now has an option for Mesh (rigid-body) physics. SurfaceType implemented as a dynamic enum; Density converted to g/ml.
- New: Particle templates.
- New: Added Collision.RotateToNormal option (useful for SecondGenOnCollide spawning). Added Decal.Thickness option, pfx1 conversion for Decals.
- New: CVar to automatically replace pfx1 effects with pfx2 in-game. Also conversion tweaks.
- New: More improvements to the particle attribute implementation - setting age to 1.0 is enough, no need to set ES_EXPIRED.
- New: (Entity) Added spawn params structure to ParticleEntity.
- New: Added spawn params to particle emitter.
- New: Implemented count scale, speed scale and size scale from SpawnParams.
- New: (Entity) Particle entity only restarts the emitter when needed and not every time serialization is executed.
- New: (Entity) Added restart and kill buttons to the particle entity.
- New: Particle color attributes are stored as ColorB.
- New: (Entity) Particle entity can change custom emitter attributes.
- New: (Schematyc) Particle component can change emitter attributes.
- New: (Schematyc) Added SetParameters node to the particle component to change spawn params settings during runtime.
- New: (Schematyc) Added set attributes nodes.
- New: Reset particle emitter entity connection when a GeomRef is specifically set.
- New: (Schematyc) Added SetSpawnGeometry node to the particle component and GetGeom node to the geometry component.
- New: Added axis option to location circle and velocity cone.
- New: Camera distance time source.
- New: SpawnParams now allows overriding the particle spec.
- New: Added memory utilization stats per runtime.
- New: Added offset to render sprites.
- New: Added camera offset to sprites.
- New: Brought back the attribute modifier - more practical than just using the Linear modifier.
- New: Added offset option to ribbons.
- New: Initial version of Render:Decals.
- New: Added debug draw bounding box.
- New: Moved particle display stats to the particle system and changed the way they get displayed.
- New: New Wavicle display and Statoscope stats.
- New: "Updated particle" stats should always be consistent.
- New: Much improved collision behavior: no more missed collisions, consistent bounce and slide. Log collision only when not sliding. Use the faster and more reliable Terrain.RayTrace. Added sliding friction.
- New: Implemented priming - sets the update time to the equilibrium time when primed. Should only occur on level startup or when testing. No update time subdividing yet; many functions currently do not work well with long update times.
- New: (Flowgraph) Added AttributeSet and AttributeGet flow nodes.
- New: SpawnDistance now uses the actual inter-particle distance rather than the sometimes-absent velocity.
- New: More conversions from pfx1 - Connection, TailLength, OnParentCollide, Restart (Pulse), more Facing modes, Spiral (Turbulence), Collisions, LightSource, AudioSource, BindEmitterToCamera, Visible Indoors/Underwater, SortBias, Comment.
- Refactored: Improved UpdateRanges to allow range-based looping.
- Refactored: Removed legacy particle attribute integration with LUA.
- Refactored: SpawnParams now has a regular Serialization function.
- Refactored: Reimplemented attribute type auto-conversion using a type table instead of switch cases.
- Refactored: Removed some render sprites static dispatching - no performance implication.
- Refactored: Merged write-to-GPU functions in feature sprites - no performance implication.
- Refactored: Decoupled FeatureCollision from FeatureMotion, added EUL_PostUpdate.
- Optimized: Reduced the number of 3D Engine re-registers by caching an inflated bbox.
- Optimized: Do not update emitters which are set to invisible.
- Optimized: Only update emitter vis and phys environment when cached bounds also change.
- Optimized: Created separate vectors for CPU and GPU runtimes to prevent some virtual calls.
- Optimized: Avoid some asserts in profile builds and only have them in debug builds instead.
- Optimized: Improved DragFastIntegral - much more accurate for long update times, almost as fast as quadratic.
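The DragFastIntegral entry above concerns integrating particle travel distance under linear drag. A minimal illustrative sketch (not the engine's actual code; function names are hypothetical) of why a low-order polynomial approximation drifts for long update times, compared against the exact exponential form:

```cpp
#include <cmath>

// Exact travel distance under linear drag k over timestep dt:
//   x = v0 * (1 - exp(-k*dt)) / k
float DragIntegralExact(float v0, float k, float dt)
{
    return v0 * (1.0f - std::exp(-k * dt)) / k;
}

// Quadratic (third-order Taylor) approximation of the same integral:
//   x ~= v0 * dt * (1 - k*dt/2 + (k*dt)^2/6)
// Cheap and accurate for small k*dt, but diverges badly for long timesteps.
float DragIntegralQuadratic(float v0, float k, float dt)
{
    const float kt = k * dt;
    return v0 * dt * (1.0f - kt * 0.5f + kt * kt * (1.0f / 6.0f));
}
```

For k*dt around 0.1 the two agree to within a fraction of a percent; at k*dt of 3 or more the polynomial overshoots severely, which is the regime a "fast but accurate for long update times" integral has to handle.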
- Fixed: Bug fixes from dev_wavicle: out-of-bounds array crash on instance restarts; incorrect stream access in ParentSpeedSampler; null-stream access in SpawnParams code; bug in FieldColor modifiers (HNT-17675).
- Fixed: Crash when changing spawn-in-direction of an effect while running.
- Fixed: Implemented Get/SetOwnerEntity. Emitters now search consistently for EmitGeom in all slots.
- Fixed: Null-stream access in SpawnParams code.
- Fixed: Changing effect options was not properly setting the asset to modified.
- Fixed: Lifetime not properly clamped when modifiers can make it negative.
- Fixed: Emitter ID improperly initialized - sorting of profiler entries was undefined.
- Fixed: Crash in profiler - display profiler before trimming emitters.
- Fixed: Color curve modifier was not using scale and bias parameters.
- Fixed: Null pointer access in CParticleComponent::SetName.
- Fixed: RenderMeshes and LocationGeometry attachment now properly use particle sizes. Load full mesh without streaming when accessing pieces.
- Fixed: Location Bind to camera was not properly aligning particles when the camera moved.
- Fixed: Particles killed by KillOnParentDeath were not triggering SecondGen:OnDeath.
- Fixed: Extra instance creation that could double the particles created. Fixed Emitter.EmitParticle and AddInstance. Streamlined some functions.
- Fixed: Particle effect edit version takes into consideration all component material modification IDs - prevents renderer crashes after editing a material.
- Fixed: Enforce hard limits on Params.
- Fixed: (Entity) Properly set the attribute table when jumping into game mode.
- Fixed: Independent emitters could sometimes stay active forever.
- Fixed: (UI) Load Selected Entity and Particle Edit buttons.
- Fixed: First-generation immortal particles were not being removed after deactivating an emitter.
- Fixed: Ribbons lighting in free mode.
- Fixed: Screen space collisions in fluid dynamics.
- Fixed: CRY_DEBUG_PARTICLE_SYSTEM to actually define pragmas.
- Fixed: More timing fixes - SecondGenOnDeath now spawns at the correct sub-frame time. NormalAge is again allowed to be > 1 to determine death time. Spawn start age is now more accurate. Merged TriggerParticles functions.
- Fixed: Remove all runtime instances when an emitter is deactivated; prevents emitters which are supposed to be dormant from creating new particles.
- Fixed: Improper physics and vis environment initialization when the emitter bounding box changed size.
- Fixed: Collision improvements - collisions are now more accurate, using previous and current positions. Combined collision detection and response. SecondGenOnCollision is now sub-frame correct. Removed threshold on collision frame-time. Changed contact flags to a much simpler bitfield. Generalized TriggerParticles to take an SInstance array.
- Fixed: Avoid texture memory leaks.
- Fixed: Several pfx1 conversion issues.
- Fixed: The AudioTrigger feature now allows both a start and a stop trigger in the same feature - makes it consistent with other audio triggers in the Engine.
- Fixed: Crash in creating a unique component name; generalized to work with no number limit.
- Tweaked: RenderSprites.axisScale default = 0.

PS4 Renderer
- New: Ported to the new render pipeline.
- New: Support for HW tessellation pipelines (on-chip only).
- New: Support for geometry shader pipelines (on-chip and off-chip).

XBox One Renderer
- Optimized: Durango texture defrag support.
- Fixed: Z-target downsampling.
- Tweaked: Fixed texture pool size to 1.5GB - streaming now uses the texture pool size as a guide and relies upon defrag to ensure allocation success.

Physics

Physics
- New: Added standalone phys debugger and support for binary world dumps.
- New: Added ground planes to world dump.
- New: Store partid in contacts result when doing PWI.
- New: Support multiple grids/local simulation (initially disabled).
- Optimized: Support for large Z coordinate in entity grid.
- Optimized: Tweaked proxy helpers transparency settings.
- Optimized: Tweaked triangle hash grids for flat objects.
- Fixed: Ray hit distance is > 0 during underflows in faraway rays.
- Fixed: Restored phys debugger compilability.
- Fixed: RemoveGeometry will remove all parts with the same ID.
- Fixed: Potential multithreading issue.
- Fixed: Restored phys debugger compilability after CryCommon changes.
- Fixed: Small issue with writing dumps.
- Fixed: Tweaked collision safety for fast objects.
- Fixed: Disabled ragdoll step rollback on deep collisions since it wasn't working well.
- Fixed: Vehicle wheel sync issue.
- Fixed: Cloth scaling.
- Fixed: Some on-demand physics fixes.
- Fixed: Box-cylinder unprojection issue (false contact rejection).
- Fixed: Issue with obb-based brush grid registration, plus a potential MT lock with traceable parts.
- Fixed: Removed temporary test bounciness disabling.
- Fixed: Issue with AABB tree rebuilding for 0-tri meshes.
- Fixed: Entities not deleted if no immediate deletion listeners are registered.
- Fixed: Small issue with vehicle wheel position update.
- Fixed: Issue with certain types of custom intersection checks.
- Fixed: Some rare collision issues.
- Fixed: Preserve StatObj pointer when converting box to mesh for breaking.
- Fixed: Improved/capped ragdolls' rotation.
- Fixed: rwi_ignore_terrain_holes.
- Fixed: Issue with entity deletion purging.
- Fixed: Exporting a CGF with 16-bit positions will crash the Sandbox Editor and the Engine.
- Fixed: Player not responding to gravity changes.
- Tweaked: Renamed variables to solve name collision with the "slots" define in Qt.

PhysX
- Optimized: Scene tweaks (always CCD, enable stabilization). Improved support for pe_params_part, enabled scratch buffer.
- Fixed: Support vehicles with more than 4 wheels.

Network

Network
- New: IEntityComponent-based entities have gained networking support:
  - All IGameObject/CGameObject dependencies have been removed.
  - RMIs are supported by explicit Register/Invoke.
  - Game code can override IEntityComponent::NetSerialize for aspect serialization.
  - Introduced INetEntity/CNetEntity as a part of CEntity to encapsulate networking.
- Refactored: CGameObject-based entities were missing network profiles after the INetEntity refactoring.
- Refactored: CGameObject serialization wasn't invoked properly due to a renamed method, NetSerializeEntity.
- Refactored: Aspect hashing removal. It was disabled and superseded by partial updates.
- Fixed: Debug printing of memory statistics in net_meminfo 4.
- Fixed: DEBUG_KIT compilation.
- Fixed: PersistantDebug text flickering.
- Fixed: Physics proxy wasn't properly serialized for GameObjects (affected vehicles).
- Fixed: (DedicatedServer) Crash due to memory corruption on dedicated server start.
- Tweaked: Dedicated server fixes.

Project System

Projects
- New: Automatically generate CMakeLists.txt and solutions for C# projects.
- New: Added Package Build option to the cryproject files which exports projects to a redistributable state.
- New: Introduced new 'sys_project' CVar for specifying which .cryproject to load; allows absolute and relative (to the Engine directory) paths. Default is 'game.cryproject'.
- New: Allow specifying Engine version '.', indicating that the Engine is in the same directory as the .cryproject file.
- Refactored: Removed the Build Solution option from the cryproject files.
- Tweaked: Handle the case where the value of "sys_project" is missing the ".cryproject" extension.
- Tweaked: Always update the window title with the project name.
- Tweaked: After switching the target Engine for a project, the solution is not automatically generated again.

Sandbox Editor

General
- New: Show node Quicksearch when dragging an edge onto nothing.
- New: Mark missing nodes in red in the Viewport.
- New: (TrackView) Record icon is now red.
- New: (TrackView) Mousewheel above the track tree will scroll, while mousewheel above the dopesheet will zoom.
- New: Question dialogs cannot be cancelled unless they have a cancel button. If not, the question must be answered.
- New: Replaced missing icon for scale snap.
- New: Gravity volumes can now be edited with a gizmo.
- New: Sample editor command registration.
- New: Re-use previous tool positions when opening tools.
- New: Moving assets.
- New: Cleaned up the Viewport context menu when selecting an object. Added an option to allow detaching objects from a group to the root or detaching them onto their parent group. This is also reflected in the main window menus.
- New: Linking & attaching to group/prefab can now be done in the Level Explorer by dragging & dropping objects.
- New: Allow linking objects that reside in different layers.
- New: (CharacterTool) Separate debug draw of Modifier & Attachment gizmos.
- New: Revived the LOD Generator.
- New: Snapping for scale.
- New: Added option to hide/unhide Links to the Viewport Display Options menu.
- New: New "duplicate layer" item in the Terrain Editor.
- New: Axis lock when moving curves while Shift is pressed.
- New: On GameToken FG nodes: right-click > search token usage, plus right-click > open token in library.
- New: Cancel color picker with the ESC button.
- New: Terrain layer properties displayed in rows.
- New: Added reference axes when using the Rotate Tool. Now drawing the pivot point of the object when performing a translation.
- New: "Update Navigation Area" option in context menu for a selected NavigationAreaObject.
- New: Navmesh update mode in the Sandbox Editor - update only after no new entity changes are received.
- New: Added an extra mode of interaction for rotation gizmos - move along the tangent of the circle.
- New: Brought back the option to display the object frame of reference for selected objects.
- New: (AISystem) MNM update progress displayed in the Sandbox Editor notification center.
- New: Level Explorer actions for Hide/Show All and Freeze/Unfreeze All, plus toggles.
- New: Numeric fields convert ',' to '.'.
- New: Double-click on the "frozen" icon isolates the object/layer as the only editable object in the layer.
- New: Double-click on the "visible" icon isolates the object/layer as the only visible object in the layer.
- New: Added Freeze/Hide All In Layer.
- New: Brought back archetype entity attributes serialization in the Database view, implemented through IEntityArchetypeManagerExtension.
- New: Added a button for legacy pickers to all resource selectors (where it makes sense).
- New: Texture import is now supported from common image formats.
- New: (CryPak) Added support for the %ENGINE% alias to address paths coming from engine.pak, whereas no prefix will look in the project folder. For backwards compatibility, if a path is not found in the project folder, the engine folder will be used as a fallback.
- New: (Asset Browser) Context menu with actions, thumbnails and thumbnail view. Removed the preview window. Engine assets are now handled properly in a separate folder.
- New: Level creation now integrated with the Asset System.
- New: Particle Editor now integrated with the Asset System.
- New: Added read-only folders and assets in the Asset Browser.
- New: Added asset dependency tracking and a dependency graph viewer.
- New: Added re-import of assets.
- New: Generate material and model thumbnails.
- New: Validate RC version in the Sandbox Editor.
- Refactored: Set Paste with Links to Ctrl+V and Paste without to Ctrl+Shift+V.
- Refactored: Cleanup - removed unused special drag mode for edges.
- Refactored: Fixed regression - graph disabled, status not visible.
- Refactored: Cleaned up the Flowgraph tree list.
- Refactored: Renamed files and organized them in folders. Split the Component and Tree lists into their own files.
- Refactored: Restructured AI menu and commands in the Editor; added the possibility to assign keyboard shortcuts.
- Refactored: Selection update notifications are now sent in all selection-altering code paths.
- Refactored: Separated Flowgraph load of Global and Level FG Modules.
- Refactored: Fixed Editor slowdowns related to Flowgraph Modules and Flowgraph reloading.
- Refactored: Removed Fetch/Hold code; the functionality was no longer used or desired.
- Refactored: Re-implemented the terrain resize operation without level Hold/Fetch.
- Refactored: Initialize EditorCommon's global IEditor pointer at the beginning of the CEditorImpl constructor instead of at the end. This allows EditorCommon systems to access GetIEditor on initialization.
- Refactored: Enabled undo on QPropertyTree in the MBT Editor, DescriptorDB and Archetype DatabaseView.
- Refactored: (SVOGI) Added "Global illumination" flag to designer object properties.
- Refactored: Removed the need for registering resource pickers on the Editor plugins side - using a macro is now sufficient.
- Refactored: Changed rotation gizmo UI - removed the dotted line, added arrows to denote interaction direction.
- Refactored: Cleaned up unused functions from Viewport: aspect ratio and window rectangle.
- Refactored: If property widgets are left without observable items, delete them and unlock the parent Inspector.
- Refactored: (Sandbox plugin system) Added SmartObjects editor plugin, sample editor plugin and MFCTools plugin. Added placeholder dialog editor and vehicle editor plugins.
- Refactored: Moved Yasli code into the CRYENGINE code solution.
- Refactored: Draw thick lines with a geometry shader.
- Refactored: Removed obsolete cryasset tag.
- Refactored: Moved asset types from MeshImporter to the Sandbox Editor.
- Optimized: Cleaned up FG tree reload on object change - improves level loading and object editing.
- Optimized: Fixed slowdown in ReloadEntityScripts.
- Optimized: Fixed slowdown when creating a new module on production levels.
- Optimized: Reviewed FG UI reloading.
- Optimized: ClipVolumeObject is updated only when it was changed before exporting its data.
- Optimized: Replaced all MFC numeric dialog boxes.
- Optimized: AI cleanup - removed the old navigation system and cleaned up AI code in the Editor.
- Optimized: DisplayContext only pushes matrices to AuxGeometry when matrices actually change, and not before each draw call.
- Optimized: Removed CPU-side transforms from the DisplayContext pipeline; uses GPU-side transforms instead.
- Fixed: Now using an exponential moving average to decide whether to stop processing Engine updates while the UI is being hovered/used. This is necessary to keep the UI snappy when the Engine is running at low frame rates.
- Fixed: Group properties showing up again in the Properties panel. Removed group operators from the base object, since these actions could be accomplished by many other means without polluting the already overcrowded properties.
- Fixed: Issues with the quick search node.
- Fixed: Restore column order when setting the state of the Level Explorer.
- Fixed: If an object is attached and its position doesn't need to be recalculated, make sure to recalculate its new world matrix after the parent is assigned.
- Fixed: (Terrain Editor) Objects float when using the Raise/Lower and Smooth operations in combination with Undo.
- Fixed: Cases where invalid Qt icons were used - added an assertion to avoid regressions in the future.
- Fixed: Currently selected graph not visible in the tree when selected from the search.
- Fixed: Multiple issues with the graph tree, such as wrong names when entities are added or renamed, graphs in the wrong place on prefab creation, constant folding and flickering, and performance issues.
- Fixed: Relative paths to textures in materials weren't reflected while syncing to 3ds Max.
- Fixed: Missing throw for std::runtime_exception in some Python functions.
- Fixed: Renamed "Freeze" button to "Apply" in Designer > Subdivision so that it's more intuitive.
- Fixed: Masked selection works for brushes when they are in a group.
- Fixed: Gizmo should now appear in the correct position when modifying designer vertices.
- Fixed: Mannequin preview works even without a level loaded.
- Fixed: Search results turn white when sorted.
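The exponential-moving-average entry above smooths a noisy signal (such as frame time) so that a single spike does not flip the "stop processing Engine updates" decision. A minimal sketch of the technique, assuming a hypothetical FrameTimeEMA type (this is not the Editor's actual code):

```cpp
#include <cmath>

// Exponential moving average over frame times. Each update blends the new
// sample into the running average:  avg = alpha*sample + (1-alpha)*avg.
// A higher alpha reacts faster; a lower alpha smooths more aggressively,
// so a one-off slow frame barely moves the decision value.
struct FrameTimeEMA
{
    float alpha;   // smoothing factor in (0, 1]
    float average; // smoothed frame time, e.g. in milliseconds

    explicit FrameTimeEMA(float a) : alpha(a), average(0.0f) {}

    float Update(float frameMs)
    {
        average = alpha * frameMs + (1.0f - alpha) * average;
        return average;
    }
};
```

The consumer would compare `average` (not the raw frame time) against a threshold before deciding to throttle Engine updates in favor of UI responsiveness.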
- Fixed: It's now possible to paint vegetation only on brushes.
- Fixed: Always show the "Archetype Entity" category.
- Fixed: Loading of the Archetype Editor object.
- Fixed: Duplicating an object should now have it move in the correct coordinate system.
- Fixed: Crash when dragging through checkboxes in the property tree, where it was recreating the property tree and invalidating a row that was cached later.
- Fixed: Ensure Ctrl+Tab is handled by tool tab widgets.
- Fixed: Pressed Shift and Ctrl don't block panning of the camera.
- Fixed: Changed wording in the notification when there's an action to execute.
- Fixed: Roof audio occlusion should now show as intended.
- Fixed: Imported *.obj file doesn't show up in MI.
- Fixed: Hidden solids do not show up when helpers are turned off.
- Fixed: Slider in Designer Tool subdivision moves only 1 step when clicked.
- Fixed: Rotation gizmo disc makes at most one lap when rotating.
- Fixed: Undo for uniform scale not being pushed to the undo history.
- Fixed: Removed the "Cancel" option in dialogs where "No" and "Cancel" had the same effect.
- Fixed: Better handling of closing question dialogs - closing will now cancel instead of giving a "No" answer.
- Fixed: It's now possible to rotate multiple objects using the trackball and view-aligned gizmo in the objects' local space.
- Fixed: Highlighting for rotation gizmos when looking at them from the side.
- Fixed: Occasional problem where keys do not get added on straight curves.
- Fixed: Selecting an item in a Database View will now always work if the item exists in this view.
- Fixed: Layer rename now removes the old layer from disk on save.
- Fixed: Changed UI description for the Cube Editor; the mousewheel actually changes the tool's mode and not the brush size.
- Fixed: Crash renaming a lens flare effect.
- Fixed: Added a context menu in the Level Explorer when selecting a folder to allow importing a layer within the folder.
- Fixed: Level closing optimizations - in particular, do not invalidate matrices when closing, as this triggers a lot of unnecessary work.
- Fixed: Crash when changing the type of view in the Level Explorer and Find Objects when there were filters that weren't shared between the different views.
- Fixed: Deleting a layer doesn't change the active layer.
- Fixed: Mouse wheel rotation of a duplicated object.
- Fixed: When duplicating a pair of objects that have an entity link, if the entity link name corresponds to the previous target's name, the name will be adjusted to match the new target.
- Fixed: "Auto-update all instances" checkbox in the Properties panel, which was simply not toggleable at all.
- Fixed: Creating a layer in the Level Explorer will no longer throw a warning saying that a layer with the same name already existed.
- Fixed: Prefab children picking functionality in the Properties panel is no longer triggered when single-clicking the field. Also added functionality that allows selecting the object on double-click.
- Fixed: Entity links pointer was dangling while removing all entity links.
- Fixed: Crash when the Curve Editor is active in the Particle Editor and the owner is changed (Self, Parent).
- Fixed: Assert on saving a level and changing the camera angle.
- Fixed: Memory leak in ReloadEntityScripts.
- Fixed: Entity node changes via LUA scripts not applied on ReloadScript of the entity or of all entities.
- Fixed: Graph losing changes if not open when adding/removing other graphs.
- Fixed: Graph marked as unchanged if open when adding/removing other graphs.
- Fixed: Activating debug on a graph marks it as unchanged.
- Fixed: Save dialog for new FG Modules.
- Fixed: Clicking on a graph marks it as unchanged.
- Fixed: Mouse should still not trigger when the left mouse button is released.
- Fixed: Tooltip not showing up in the Level Explorer. Tooltip now uses the correct background color and margins.
- Fixed: Floats rounded to 2 decimal places when set in the 'Console Variables' window.
- Fixed: Cannot set the value of string CVars in the 'Console Variables' window.
- Fixed: Level Explorer will populate its content properly when loading a level in "show active layer" mode.
- Fixed: Sorting in the Level Explorer.
- Fixed: Background color UI in the LOD Generator.
- Fixed: Made it possible to rename entity links from the Properties panel.
- Fixed: Designer Tool no longer uses custom styling.
- Fixed: Disabled double-click to expand groups/prefabs/layers in the Level Explorer. Double-clicking in this specific tree view should set a layer as active or select and go to an object.
- Fixed: Removed deprecated vegetation imposters display option.
- Fixed: Layer Picker can now only select object layers.
- Fixed: Go To dialog goes to world origin when pressing OK.
- Fixed: Changing AI preferences from the main menu will now trigger preferences to be saved out. This fix should apply to any other preference that also has a menu option.
- Fixed: Rotating axis when aligned with the view.
- Fixed: Make Qt read-only properties not display in red.
- Fixed: Suppress warning on missing tokens for MaterialFX graphs before the level and token libraries are loaded.
- Fixed: Changes not detected in module graphs when editing graph tokens.
- Fixed: Preserve visible Level Explorer columns when loading a new level.
- Fixed: Removed irrelevant warning with the Prefabs event source node in FG.
- Fixed: Crash when deleting multiple FG nodes.
- Fixed: Malfunctioning FG simple comment editing.
- Fixed: A material which has MTL_FLAG_NOPREVIEW is rendered to the material preview window.
- Fixed: Area solid export which sometimes produced corrupted files.
- Fixed: The Flowgraph window should never open more than once.
- Fixed: Crashes on Go To Game Token definition.
- Fixed: Texture preview of the selected layer in the Terrain Editor.
- Fixed: A less intrusive approach to fix missing enum strings on the UI side.
- Fixed: The Editor action is now checked for physics single-step mode when enabled.
- Fixed: Slowdown and potential crash on saving archetypes. Archetypes can now be refreshed manually instead of on save.
- Fixed: Object is highlighted in the vegetation list when clicked in move mode.
- Fixed: EnvironmentProbe is missing some properties.
- Fixed: Issue when trying to edit a shape while the Rotation Tool was selected, resulting in a crash.
- Fixed: Changed freeze/hide all except a specific layer to behave exactly the same. Removed functionality for caching layer visibility when performing the operation.
- Fixed: Issue with designer objects when trying to modify them in parent space, where it should be assumed that the parent is the actual object being modified rather than its actual parent.
- Fixed: Export of AreaSolids.
- Fixed: Wrong tooltip on filter button.
- Fixed: Alignment of the snapping and ports with the grid. Copy/paste and added nodes also follow the grid.
- Fixed: An imported PFX1 is displayed with its whole path.
- Fixed: Changed Clone Tool grid snapping to behave the same as when not cloning at all. Also added missing handling of normal snapping when using the axis constraint gizmos.
- Fixed: Debug toggle for Modules.
- Fixed: No longer able to add commands to a toolbar if they're already part of that toolbar.
- Fixed: Disabled showing LODs in the Create Objects panel.
- Fixed: Attaching to prefab from group controls in the Properties panel.
- Fixed: Issue where resetting the material on an object within a prefab was not resetting it for all instances.
- Fixed: Suspend game input when leaving fullscreen mode while the game is running.
- Fixed: Deleting a child from a prefab from the Properties panel would clear the prefab (at least visually in the Viewport).
- Fixed: Undoing changes to entity properties, which was not possible.
- Fixed: Crash when quickly and repeatedly toggling fullscreen.
- Fixed: Crash when exporting CVars.
- Fixed: Correctly restore tree view columns from layout.
- Fixed: Mouse events in the Viewport working incorrectly on HiDPI screens.
- Fixed: Check all key sequences when forwarding shortcuts to the main window, not just the first one.
- Fixed: (NavigationSystem) Navmesh is not updated when terrain geometry isn't edited.
- Fixed: 'ReloadGeometry' erases the properties of all geom entities in the scene.
- Fixed: Issue that was causing objects to have duplicate names when loading an existing map.
- Fixed: Issue where changes to vis area heights were not being reflected in the Viewport/game.
- Fixed: Favorite filter button in the CVar list.
- Fixed: Issue with sorting favorites in the CVar list.
- Fixed: Making a child layer visible should make the parent layer visible as well (but not all of the parent's children).
- Fixed: Previous state will not be remembered by layers when toggling all other layers to be hidden.
- Fixed: Unhiding objects will now only modify objects within layers that are not hidden.
- Fixed: Crash when trying to pick for a prefab when the properties window has been closed.
- Fixed: PropertyRowResourceFilePath freezing QPropertyTree inside MFC windows.
- Fixed: CAutoLogTime marker for entity archetypes database loading.
- Fixed: Prefab children transform invalidation during attachment, to improve level loading times.
- Fixed: Numbers in object names being ignored when creating a new object.
- Fixed: When a group is open, use information from child objects for highlighting.
- Fixed: Icons drawn on top of gizmos.
- Fixed: Assert when exporting to Engine.
- Fixed: Spline objects having way too high an offset from the ground during creation.
- Fixed: Issue where selecting a CVar in the CVar dialog would return the wrong CVar. Also fixed an issue with double-click not working.
- Fixed: Assert that had to do with accepting notifications and dealing with combined notifications.
- Fixed: QMenuComboBox asserting when an item was removed.
  This was due to the fact that it was changing the selected index and trying to toggle the deleted item's state.
- Fixed: Inspector locked feature not working anymore.
- Fixed: Translation gizmo not snapping to the grid as expected.
- Fixed: Generating a cubemap a second time triggers a warning in NC (CAssetManager).
- Fixed: Several issues with renaming materials in the Material Editor.
- Fixed: Issue where grouping geom entities and creating prefabs had empty bounding boxes.
- Fixed: (Terrain Editor) Objects float when using the Flatten tool.
- Fixed: Very blurry textures in CGF preview.
- Fixed: Removed redundant update of DesignerObject after exporting its data.
- Fixed: Removed warning in the texture previewer for pow2, as it is irrelevant for the preview (and not possible for cubemaps).
- Fixed: Trigger texture loading at a slightly more robust moment.
- Fixed: Double registration of materials (which triggered warnings).
- Fixed: Material preview rect invalidation.
- Fixed: Added texture-streaming hook allowing material previews to force streaming and resource update events.
- Fixed: Duplicated Level Settings update calls during level load.
- Fixed: Debug view buttons not toggled properly when CVars change.
- Fixed: (FBX Importer) Compute normals option added.
- Fixed: No selected material in the Material Editor after clicking Edit Material in the Properties panel.
- Fixed: Rectangular selection works properly for all objects.
- Fixed: (FBX) Imported *.fbx with locomotion animation is missing root translocation.
- Fixed: Designer Tool crashing when the selection changes while in the UV Mapping Tool.
- Fixed: Warning "Vegetation object is not suitable for merging".
- Fixed: (Qt Track) Properties panel only shows X-values of keys.
- Fixed: (FBX Importer) "Generate texture coordinates" option added.
- Fixed: (FBX Importer) Physical proxies have wrong transformation.
- Fixed: (Mesh Importer) Mesh node can now be used as a bone.
- Fixed: (Mesh Importer) Added UI option "Use 32 bit precision". - Fixed: Layers can now be changed using double click in the layer picker dialog. - Fixed: Area Solids and Clip volumes can be created again. - Fixed: Crash/stack overflow in UV Mapping Editor when using SmartSew. - Fixed: LODs will not show up in file dialogs (to select models). - Fixed: Cloning objects will now start their position at mouse cursor. - Fixed: Starting the game without an active Viewport. - Fixed: When opening in Explorer - A file path that doesn't exist will open its parent folder. - Fixed: File dialogs no longer clear input field when selecting directories. - Fixed: File-monitor trying to read busy files when re-exporting from 3ds max. - Fixed: (ParticleEditor) Fixed an issue that allowed to create disallowed connection which resulted in a crash. Fixed: (ParticleEditor) Fixed a crash when deleting a connected node. - Tweaked: Hide TestABF and TestABFNew panels from the LOD generator UI. - Tweaked: Add profile markers for level close. - Tweaked: (Trackview) Reordered toolbar buttons - playback is now more standard and add entity to Trackview is now first in its group. - Tweaked: New default Editor Layout with Content Browser. - Tweaked: Additional profile markers for loading times. - Tweaked: Add profile markers. - Tweaked: Selection on graphs is no longer persisted and is not included in Undo/Redo. - Tweaked: Added CVar sys_debugger_adjustments. When debugger present emitters created in Editor are inactive at start. - Tweaked: Corrected Icon for the entity containers. - Tweaked: Display tweaks. - Tweaked: Changing open pane default commands, to either open or focus if there's already one open. - Tweaked: Added commands for Vegetation Tool paint|erase|select. - Tweaked: Moved CEntity members around to avoid alignment bytes (reduces size from 432 to 416 bytes). - Tweaked: Create tool no longer uses personalization. Trackview - New: Magnetic snapping between keys or ruler marker. 
- New: TrackView Ruler markings snapping. - New: Adding frame snapping in TrackView. - Fixed: 'RenderSequence'-FPS-drop down looks bad. - Fixed: Change zoom/scroll keyboard and mouse bindings. - Fixed: "Sequence Properties" command would have been a better fit on View tab instead of File tab. Tools Tools General - New: Added new programs - KeyImport and KeyExtract that create a key.dat from public and private keys (and vice versa), but do not build them by default. - New: CryMaxExport, MayaCryExport and MotionBuilderCryExport compiling with CMake. - New: CryTIFPlugin compiling with CMake. - New: (MayaCryExport) Exporter for Maya 2017. - Fixed: Texturestreaming initialization when no device is present. - Fixed: (Export) Cryasset file can break 3dsMax export. - Fixed: Added include paths to CMakeLists.txt for stdafx and CryTIFPlugin.pipl. - Fixed: Inability to start dedicated server from project system. - Fixed: MeshBaker shader compilation. - Tweaked: Allow multiple runs of Statoscope during one game session. Resource Compiler - Fixed: Splitting of 32x32 DDNA textures is wrong.
The QGIS Browser is a panel in QGIS that lets you easily navigate your filesystem and manage geodata. You have access to common vector files (e.g., ESRI shapefiles or MapInfo files), databases (e.g., PostGIS, Oracle, SpatiaLite or MS SQL Spatial) and WMS/WFS connections. You can also view your GRASS data (to get the data into QGIS, see GRASS GIS Integration). Figure browser 1: Use the QGIS Browser to preview your data. The drag-and-drop function makes it easy to get your data into the map view and the map legend. There is a second browser available under Settings ‣ Panels. This is handy when you need to move files or layers between locations. QGIS automatically looks for the coordinate reference system (CRS) and zooms to the layer extent if you work in a blank QGIS project. If there are already files in your project, the file will just be added, and if it has the same extent and CRS, it will be visualized. If the file has another CRS and layer extent, you must first right-click on the layer and choose Set Project CRS from Layer. Then choose Zoom to Layer Extent. The Filter files function works on a directory level. Browse to the folder where you want to filter files and enter a search word or wildcard. The Browser will show only matching filenames – other data won't be displayed. It's also possible to run the QGIS Browser as a stand-alone application. To start the QGIS Browser, type "qbrowser" on the command line. In figure_browser_standalone_metadata, you can see the enhanced functionality of the stand-alone QGIS Browser. The Param tab provides the details of your connection-based datasets, like PostGIS or MSSQL Spatial. The Metadata tab contains general information about the file (see the Metadata Menu). With the Preview tab, you can have a look at your files without importing them into your QGIS project. It's also possible to preview the attributes of your files in the Attributes tab.
Managing Translations¶ This section highlights the different ways to translate and manage XLIFF files. Fetching Translations¶ The interface of the Install Tool in ADMIN TOOLS > Maintenance > Manage language packs allows you to manage the list of available languages for your users and can fetch and update language packs of TER and core extensions from the official translation server. The module is rather straightforward to use and should be pretty much self-explanatory. Downloaded language packs are stored in getLabelsPath(). The Languages module with some active languages and status of extensions language packs Language packs can also be fetched using the command line. /path/to/typo3/bin/typo3 language:update Translating Locally¶ The $GLOBALS['TYPO3_CONF_VARS']['SYS']['locallangXMLOverride'] allows you to override both locallang-XML and XLIFF files. Actually this is not just about translations. Default language files can also be overridden. In the case of XLIFF files, the syntax is as follows (to be placed in an extension's ext_localconf.php file): $GLOBALS['TYPO3_CONF_VARS']['SYS']['locallangXMLOverride']['EXT:frontend/Resources/Private/Language/locallang_tca.xlf'][] = 'EXT:examples/Resources/Private/Language/custom.xlf'; $GLOBALS['TYPO3_CONF_VARS']['SYS']['locallangXMLOverride']['de']['EXT:news/Resources/Private/Language/locallang_modadministration.xlf'][] = 'EXT:examples/Resources/Private/Language/Overrides/de.locallang_modadministration.xlf'; The first line shows how to override a file in the default language, the second how to override a German ("de") translation.
The German language file looks like this: <?xml version="1.0" encoding="utf-8" standalone="yes" ?> <xliff version="1.0"> <file source- <header/> <body> <trans-unit <source>Most important tile</source> <target>Wichtigster Titel</target> </trans-unit> </body> </file> </xliff> and the result can be easily seen in the backend: Custom translation in the TYPO3 backend Important - Please note that you do not have to copy the full reference file, but only the labels you want to translate. - The path to the file to override must be expressed as EXT:foo/bar/.... For the extension “xlf” or “xml” can be used interchangeably. The TYPO3 Core will try both anyway, but using “xlf” is more correct and future-proof. Attention The following is a bug but must be taken as a constraint for now: - The files containing the custom labels must be located inside an extension. Other locations will not be considered. - The original translation needs to exist in getLabelsPath() or next to the base translation file in extensions, for example in typo3conf/ext/myext/Resources/Private/Language/. Custom Languages¶ Supported languages describes the languages which are supported by default. It is possible to add custom languages to the TYPO3 backend and create the translations locally using XLIFF files. First of all, the language must be declared: $GLOBALS['TYPO3_CONF_VARS']['SYS']['localization']['locales']['user'] = array( 'gsw_CH' => 'Swiss German', ); This new language does not need to be entirely translated. It can be defined as falling back to another language, so that only differing labels need be translated: $GLOBALS['TYPO3_CONF_VARS']['SYS']['localization']['locales']['dependencies'] = array( 'gsw_CH' => array('de_AT', 'de'), ); In this case we define that “gsw_CH” (which is the official code for “Schwiizertüütsch” - that is, “Swiss German”) can fall back on “de_AT” (another custom translation) and then on “de”. 
The translations have to be stored in the- <header/> <body> <trans-unit <source>Swiss German</source> <target state="translated">Schwiizertü second example of de_AT (German for Austria) - no fallback would have to be defined for “de_AT” if it were just falling back on “de”.
Greenplum PL/Container Language Extension A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 5.x documentation. The plcontainer Utility.. Procedure. Docker feature developments are tied to RHEL7.x infrastructure components for kernel, devicemapper (thin provisioning, direct lvm), sVirt and systemd. - Docker is installed on Greenplum Database hosts (master, primary and all standby hosts) - For RHEL or CentOS 7.x - Docker 17.05..5.0.tar.gz - plcontainer-r-images-1 PL/Container To uninstall PL/Container, remove Docker containers and images, and then remove the PL/Container support from Greenplum Database. When you remove support for PL/Container, the plcontainer user-defined functions that you created in the database will no longer work. Uninstall Docker Containers and Images On the Greenplum Database hosts, uninstall the Docker containers and images that are no longer required. The plcontainer image-list command lists the Docker images that are installed on the local Greenplum Database host. The plcontainer image-delete command deletes a specified Docker image from all Greenplum Database hosts. - The command docker ps -a lists all containers on a host. The command docker stop stops a container. - The command docker images lists the images on a host. - The command docker rmi removes images. - The command docker rm removes containers. Remove PL/Container Support for a Database. Uninstalling PL/Container Language Extension If no databases have plcontainer as a registered language, uninstall the Greenplum Database PL/Container language extension with the gppkg utility. - Use the Greenplum Database gppkg utility with the -r option to uninstall the PL/Container language extension. This example uninstalls the PL/Container language extension on a Linux system: $ gppkg -r plcontainer. Greenplum PL/Python Language Extension. 
For information about the plpy methods, Greenplum PL/R Language Extension. For information about the pg.spi methods, see are case sensitive. < language. <command>/clientdir/pyclient., the aggressive virtual memory requirement of the Go language (golang) runtime that is used by PL/Container, and the Greenplum Database Linux server kernel parameter setting for overcommit_memory. The parameter is set to 2 which does not allow memory overcommit. A workaround that might help is to increase the amount of swap space and increase the Linux server kernel parameter overcommit_ratio. If the issue still occurs after the changes, there might be memory shortage. You should check free memory on the system and add more RAM if needed. You can also decrease the cluster load. - PL/Container does not limit the Docker base device size, the size of the Docker container. In some cases, the Docker daemon controls the base device size. For example, if the Docker storage driver is devicemapper, the Docker daemon --storage-opt option flag dm.basesize controls the base device size. The default base device size for devicemapper is 10GB. The Docker command docker info displays Docker system information including the storage driver. The base device size is displayed in Docker 1.12 and later. For information about Docker storage drivers, see the Docker information Daemon storage-driver. When setting the Docker base device size, the size must be set on all Greenplum Database hosts. -. dependencies required for Docker sudo yum install -y yum-utils device-mapper-persistent-data lvm2 - Add the Docker repo sudo yum-config-manager --add-repo - Update yum cache sudo yum makecache fast - Install Docker sudo yum -y install docker-ce - Start Docker daemon. sudo systemctl start docker -
Document history The following table describes the important changes to the documentation since the last release of Amazon API Gateway. For notification about updates to this documentation, you can subscribe to an RSS feed by choosing the RSS button in the top menu panel. Latest documentation update: September 17, 2020 Earlier updates The following table describes important changes in each release of the API Gateway Developer Guide before June 27, 2018.
If you do not choose a Distribution Set type in the UI or in your REST call when creating a Distribution Set, the selected default Distribution Set type will be used. You need the following permissions to see the System Configuration view in… The cloud user has full access by default for all available service plans. See the security chapter for further information about available roles and their included permissions.
Using Query Plan Graph View for Hive Web UI Visualize your query plans with the Query Plan Graph View in the Hive Web User Interface. Benefits of the Query Plan Graph View Use the Query Plan Graph View to: - Follow in real-time as your Hive query executes. - Watch MapReduce jobs unfold on the progress bar and examine their statistics. - See clearly which tasks in a plan are executed and which are filtered out. - Find logs for the individual stages of a query execution plan. - Spot errors quickly. - Pinpoint problems for failed or hanging queries. Query Plan Graph View Overview The query plan graph maps out every step of your query execution plan and indicates the current status of each step with a color: The Query Plan Graph View An example of the Query Plan Graph View during the execution process. Activating the Query Plan Graph View Follow these steps to enable the Query Plan Graph View in your HiveServer2 Web User Interface. - Log in to the Cloudera Manager Admin Console. - Click Clusters in the top navigation bar, and choose Hive from the list of service instances. - Click the Configuration tab on the Hive service page to edit the configuration. - In the Search field, search for Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml - Click the plus sign (+), and add the following configurations one at a time to the Service Advanced Configuration Snippet (Safety Valve):Configuring the Service Advanced Configuration Snippet (Safety Valve) - Enter a Reason for change, and then click Save Changes to commit the changes. - After your change is saved, click the restart icon above the Configuration tab. If you don’t see the reset icon, try refreshing your browser. - Click Restart Stale Services, and then click Restart Now. - Click Finish after the restart completes. Viewing the Query Plan Graph View View the Query Plan Graph in the query Drilldown page at any time during or after executing a query. 
To find it: - Log in to the Cloudera Manager Admin Console. - Click Clusters in the top navigation bar, and choose Hive from the list of service instances. - Click the HiveServer2 Web UI in the Hive toolbar, and select the HiveServer2 Web UI for the cluster node that you want to view. - On the HiveServer2 page, find your query and click Drilldown to view more details. The query could be listed under Open Queries or Closed Queries depending on its status. - Click the Query Plan tab on the Drilldown page to open the Query Plan Graph View. - Select any step of the plan to see a detailed view of that stage and its statistics. These appear below the graph.
Impersonation inside SQLCLR Stored Procedure [Jian Zeng]
- Reference > - Operators > - Aggregation Pipeline Operators > - $isoWeekYear (aggregation) $isoWeekYear (aggregation)¶ On this page Definition¶ $isoWeekYear¶ New in version 3.4. Returns the year number in ISO 8601 format. The year starts with the Monday of week 1 and ends with the Sunday of the last week. The $isoWeekYearexpression has the following operator expression syntax: Changed in version 3.6. The argument must be a valid expression that resolves to one of the following: Example¶ A collection called anniversaries contains the following documents: The following operation returns the year number in ISO 8601 format for each date field. The operation returns the following results:
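The ISO week-numbering year returned by `$isoWeekYear` can differ from the calendar year near year boundaries, which is the usual source of confusion with this operator. Python's `datetime.isocalendar()` implements the same ISO 8601 rule, so it can be used (purely as an illustration, outside MongoDB) to sanity-check expected values:

```python
from datetime import datetime

def iso_week_year(dt):
    # ISO 8601: the year that owns the ISO week containing dt.
    # This is the same value $isoWeekYear returns for that date.
    return dt.isocalendar()[0]

# 2016-01-01 was a Friday in ISO week 53 of 2015,
# so its ISO week-year is 2015, not 2016.
print(iso_week_year(datetime(2016, 1, 1)))    # 2015

# 2014-12-29 was the Monday that starts ISO week 1 of 2015.
print(iso_week_year(datetime(2014, 12, 29)))  # 2015
```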
Fire Architecture¶ Fire consists of three core components:
- Web Browser for defining end-to-end workflows for building data products and applications
  - Users interact with the web-based drag-and-drop user interface for creating Datasets and Workflows
  - Workflows leverage the exhaustive set of functional and operational nodes such as Data Profiling, Data Cleaning, ETL, NLP, OCR, Machine Learning etc. displayed in the user interface.
- Web Server running on an edge node in an Apache Spark cluster
  - To run workflows, they are submitted to the web server. The web server submits the workflow to the Apache Spark cluster as a Spark job using spark-submit. The results of the workflow execution are streamed back and displayed in the browser.
  - The Web Server provides a host of other features like interactive execution, schema inference and propagation, user permissions and roles, LDAP integration etc.
- Apache Spark cluster on which the workflows are executed as Spark jobs
  - Workflows are saved as a JSON string.
  - Workflows can also be submitted to the Spark cluster through spark-submit via a command line interface
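The overview says workflows are saved as a JSON string but does not specify the schema. Purely as an illustration of the idea (the field names below are hypothetical, not Sparkflows' actual format), a node-and-edge workflow serialized to JSON and round-tripped might look like this:

```python
import json

# Hypothetical workflow: a list of processing nodes plus the edges
# connecting them, serialized to a single JSON string for storage.
workflow = {
    "name": "clean-and-train",
    "nodes": [
        {"id": 1, "type": "ReadCSV",      "params": {"path": "data/in.csv"}},
        {"id": 2, "type": "DataCleaning", "params": {"dropNulls": True}},
        {"id": 3, "type": "MLTrain",      "params": {"algo": "logistic"}},
    ],
    "edges": [[1, 2], [2, 3]],
}

serialized = json.dumps(workflow)   # what would be stored
restored = json.loads(serialized)   # what the server would hand to spark-submit
print(restored["nodes"][1]["type"])  # DataCleaning
```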
Difference between revisions of "Maintaining a Service" Revision as of 01:33, 6 November 2013

Responsibilities
Service Maintainers are what keep COSI running smoothly. From the Roles and Responsibilities page:
- For each service, produce and maintain a document describing how the service is configured/started/shut down, what version/type of server software is being run, and the requirements for running each service
- First point of contact for problems with that service
- Keep server software well patched
- Produce some usage statistics on each service (bytes downloaded from mirror, connections to www, etc.) where possible and present them to the CSLAB-ADMIN board once per semester

Knowledge
Service Maintainers should have working knowledge of their server and keep their wiki page updated. Service Maintainers should also be prepared to instruct new members on how to keep the Service updated and running smoothly, to simplify passing the Service off.

Background Knowledge
This question gets raised fairly often, and the answer is that you do not need to know anything about the service to become a maintainer. If you are interested in a Service, contact the current maintainer! They should be more than happy to instruct you on how the Service functions.

Service Maintainers Should Know
- Where in the labs the Service is hosted (e.g. physical machine, VM host, etc.)
- What operating system it is running, and how to keep it updated
- Solutions to common problems
- How to configure any software in use
- How the machine is configured on the network
This list is neither a requirement nor a complete list, though Service Maintainers are expected to have at least a rough understanding of how the Service works.

Service Hand-off
All Service Maintainers should at some point prepare (if appropriate) to hand off their Service to another member. Handing off a Service can be made easier by doing the following:
- Keep the wiki page up-to-date with all relevant documentation (this is important, especially if for some reason there is no new maintainer).
- Find a successor long before departure.
- Work with the new person, giving them access to the service and showing them the ropes.
- Completely hand off the server prior to departure, and act as a mentor to the new maintainer.

Becoming a Maintainer
If you are interested in becoming a maintainer, contact a current maintainer who is looking to pass off his/her Service, or contact a Director, who will happily find something for you. Alternatively, you are also welcome to suggest a new Service, provided you are able to build and maintain it. Contact a Director or the VM Host Maintainer for more information on creating a new Service.
Rule designer elements Rule designer provides drag-and-drop elements that can be used to create a business rule. These elements are available in the Palette section of the Rule designer user interface. Note Application business analysts can customize the objects developed in their own applications that are marked customizable by the developers, but cannot customize the objects developed in com.bmc.arsys. For example, objects in core BMC applications like Foundation, Approval, and Assignment cannot be customized. The following table describes the different Rule designer elements:
Configuring the Editor The WYSIWYG editor is enabled by default, and can be used to edit content on CMS pages and blocks, and in products and categories. From the configuration you can activate or deactivate the editor, and elect to use static, rather than dynamic, URLs for media content in product and category descriptions. TinyMCE 4 is now the default WYSIWYG editor. The implemented version, which is actually 4.6, offers an improved user experience and supports a wide range of WYSIWYG plugins. TinyMCE 3 is now deprecated, but can be enabled in the configuration if required for previous customizations. To configure the editor: On the Admin sidebar, go to Stores > Settings > Configuration. In the panel on the left under General, choose Content Management. Expand WYSIWYG Options and do the following: Set Enable WYSIWYG Editor to your preference. The editor is enabled by default. Set WYSIWYG Editor to the version of the TinyMCE editor that you want to use. TinyMCE 4 is the recommended and default editor. Set Static URLs for Media Content in WYSIWYG to your preference for all media content that is entered with the WYSIWYG editor. When complete, click Save Config.
Indexing policies in Azure Cosmos DB In Azure Cosmos DB, every container has an indexing policy that dictates how the container's items should be indexed. The default indexing policy for newly created containers indexes every property of every item and enforces range indexes for any string or number. This allows you to get high query performance without having to think about indexing and index management upfront. In some situations, you may want to override this automatic behavior to better suit your requirements. You can customize a container's indexing policy by setting its indexing mode, and include or exclude property paths. Note The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in Azure Cosmos DB's API for MongoDB Indexing mode Azure Cosmos DB supports two indexing modes: - Consistent: The index is updated synchronously as you create, update or delete items. This means that the consistency of your read queries will be the consistency configured for the account. - None: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the IndexTransformationProgress until complete. Note Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in inconsistent or incomplete query results. If you plan to query a Cosmos container, you should not select lazy indexing. In June 2020, we introduced a change that no longer allows new containers to be set to Lazy indexing mode. 
If your Azure Cosmos DB account already contains at least one container with lazy indexing, this account is automatically exempt from the change. You can also request an exemption by contacting Azure support (except if you are using an Azure Cosmos account in serverless mode which doesn't support lazy indexing). By default, indexing policy is set to automatic. It's achieved by setting the automatic property in the indexing policy to true. Setting this property to true allows Azure CosmosDB to automatically index documents as they are written. Including and excluding property paths A custom indexing policy can specify property paths that are explicitly included or excluded from indexing. By optimizing the number of paths that are indexed, you can substantially reduce the latency and RU charge of write operations. These paths are defined following the method described in the indexing overview section with the following additions: - a path leading to a scalar value (string or number) ends with /? - elements from an array are addressed together through the /[]notation (instead of /0, /1etc.) - the /*wildcard can be used to match any elements below the node Taking the same example again: { "locations": [ { "country": "Germany", "city": "Berlin" }, { "country": "France", "city": "Paris" } ], "headquarters": { "country": "Belgium", "employees": 250 } "exports": [ { "city": "Moscow" }, { "city": "Athens" } ] } the headquarters's employeespath is /headquarters/employees/? the locations' countrypath is /locations/[]/country/? the path to anything under headquartersis /headquarters/* For example, we could include the /headquarters/employees/? path. This path would ensure that we index the employees property but would not index additional nested JSON within this property. Include/exclude strategy Any indexing policy has to include the root path /* as either an included or an excluded path. Include the root path to selectively exclude paths that don't need to be indexed. 
This is the recommended approach as it lets Azure Cosmos DB proactively index any new property that may be added to your model. Exclude the root path to selectively include paths that need to be indexed. For paths with regular characters that include: alphanumeric characters and _ (underscore), you don't have to escape the path string around double quotes (for example, "/path/?"). For paths with other special characters, you need to escape the path string around double quotes (for example, "/"path-abc"/?"). If you expect special characters in your path, you can escape every path for safety. Functionally it doesn't make any difference if you escape every path Vs just the ones that have special characters. The system property _etagis excluded from indexing by default, unless the etag is added to the included path for indexing. If the indexing mode is set to consistent, the system properties idand _tsare automatically indexed. When including and excluding paths, you may encounter the following attributes: kindcan be either rangeor hash. Hash index support is limited to equality filters. Range index functionality provides all of the functionality of hash indexes as well as efficient sorting, range filters, system functions. We always recommend using a range index. precisionis a number defined at the index level for included paths. A value of -1indicates maximum precision. We recommend always setting this value to -1. dataTypecan be either Stringor Number. This indicates the types of JSON properties which will be indexed. When not specified, these properties will have the following default values: See this section for indexing policy examples for including and excluding paths. Include/exclude precedence If your included paths and excluded paths have a conflict, the more precise path takes precedence. 
Here's an example: Included Path: /food/ingredients/nutrition/* Excluded Path: /food/ingredients/* In this case, the included path takes precedence over the excluded path because it is more precise. Based on these paths, any data in the food/ingredients path or nested within would be excluded from the index. The exception would be data within the included path: /food/ingredients/nutrition/*, which would be indexed. Here are some rules for included and excluded paths precedence in Azure Cosmos DB: Deeper paths are more precise than narrower paths. for example: /a/b/?is more precise than /a/?. The /?is more precise than /*. For example /a/?is more precise than /a/*so /a/?takes precedence. The path /*must be either an included path or excluded path. Spatial indexes When you define a spatial path in the indexing policy, you should define which index type should be applied to that path. Possible types for spatial indexes include: Point Polygon MultiPolygon LineString Azure Cosmos DB, by default, will not create any spatial indexes. If you would like to use spatial SQL built-in functions, you should create a spatial index on the required properties. See this section for indexing policy examples for adding spatial indexes. Composite indexes Queries that have an ORDER BY clause with two or more properties require a composite index. You can also define a composite index to improve the performance of many equality and range queries. By default, no composite indexes are defined so you should add composite indexes as needed. Unlike with included or excluded paths, you can't create a path with the /* wildcard. Every composite path has an implicit /? at the end of the path that you don't need to specify. Composite paths lead to a scalar value and this is the only value that is included in the composite index. When defining a composite index, you specify: Two or more property paths. The sequence in which property paths are defined matters. The order (ascending or descending). 
Note

When you add a composite index, the query will utilize existing range indexes until the new composite index addition is complete. Therefore, when you add a composite index, you may not immediately observe performance improvements. It is possible to track the progress of index transformation by using one of the SDKs.

ORDER BY queries on multiple properties:

The following considerations apply when using composite indexes for queries with an ORDER BY clause with two or more properties:

- If the composite index paths do not match the sequence of the properties in the ORDER BY clause, then the composite index can't support the query.
- The order of the composite index paths (ascending or descending) should also match the order in the ORDER BY clause.
- The composite index also supports an ORDER BY clause with the opposite order on all paths.

Consider the following example where a composite index is defined on properties name, age, and _ts:

You should customize your indexing policy so you can serve all necessary ORDER BY queries.

Queries with filters on multiple properties

If a query has filters on two or more properties, it may be helpful to create a composite index for these properties. For example, consider the following query which has an equality filter on two properties:

SELECT * FROM c WHERE c.name = "John" AND c.age = 18

This query will be more efficient, taking less time and consuming fewer RUs, if it is able to leverage a composite index on (name ASC, age ASC).

Queries with range filters can also be optimized with a composite index. However, the query can only have a single range filter. Range filters include >, <, <=, >=, and !=. The range filter should be defined last in the composite index. Consider the following query with both equality and range filters:

SELECT * FROM c WHERE c.name = "John" AND c.age > 18

This query will be more efficient with a composite index on (name ASC, age ASC).
However, the query would not utilize a composite index on (age ASC, name ASC) because the equality filters must be defined first in the composite index.

The following considerations apply when creating composite indexes for queries with filters on multiple properties:

- The properties in the query's filter should match those in the composite index. If a property is in the composite index but is not included in the query as a filter, the query will not utilize the composite index.
- If a query has additional properties in the filter that were not defined in a composite index, then a combination of composite and range indexes will be used to evaluate the query. This will require fewer RUs than exclusively using range indexes.
- If a property has a range filter (>, <, <=, >=, or !=), then this property should be defined last in the composite index. If a query has more than one range filter, it will not utilize the composite index.
- When creating a composite index to optimize queries with multiple filters, the order of the composite index (ascending or descending) will have no impact on the results. This property is optional.
- If you do not define a composite index for a query with filters on multiple properties, the query will still succeed. However, the RU cost of the query can be reduced with a composite index.

Consider the following examples where a composite index is defined on properties name, age, and timestamp:

Queries with a filter as well as an ORDER BY clause

If a query filters on one or more properties and has different properties in the ORDER BY clause, it may be helpful to add the properties in the filter to the ORDER BY clause.
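The article's example table is not reproduced here, but the filter-placement rules in the list above can be expressed as a small check. This is a hypothetical helper of my own for illustration, not part of any Azure SDK:

```python
RANGE_OPS = {">", "<", ">=", "<=", "!="}

def composite_layout_is_usable(filters):
    """Check an ordered list of (property, operator) filters against the
    rules above: a composite index laid out in this order is usable only
    if there is at most one range filter and it comes last. Sketch only,
    not the actual query-engine logic."""
    range_positions = [i for i, (_, op) in enumerate(filters) if op in RANGE_OPS]
    if len(range_positions) > 1:
        return False  # more than one range filter: composite index not used
    return not range_positions or range_positions[0] == len(filters) - 1


# (name ASC, age ASC) works for: name = "John" AND age > 18
print(composite_layout_is_usable([("name", "="), ("age", ">")]))   # True
# (age ASC, name ASC) puts the range filter first, so it is not used:
print(composite_layout_is_usable([("age", ">"), ("name", "=")]))   # False
```

This matches the worked example in the text: the equality filter must come before the single range filter in the composite index layout.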
For example, by adding the properties in the filter to the ORDER BY clause, the following query could be rewritten to leverage a composite index:

Query using range index:

SELECT * FROM c WHERE c.name = "John" ORDER BY c.timestamp

Query using composite index:

SELECT * FROM c WHERE c.name = "John" ORDER BY c.name, c.timestamp

The same pattern and query optimizations can be generalized for queries with multiple equality filters:

Query using range index:

SELECT * FROM c WHERE c.name = "John" AND c.age = 18 ORDER BY c.timestamp

Query using composite index:

SELECT * FROM c WHERE c.name = "John" AND c.age = 18 ORDER BY c.name, c.age, c.timestamp

The following considerations apply when creating composite indexes to optimize a query with a filter and an ORDER BY clause:

- If the query filters on properties, these should be included first in the ORDER BY clause.
- If you do not define a composite index on a query with a filter on one property and a separate ORDER BY clause using a different property, the query will still succeed. However, the RU cost of the query can be reduced with a composite index, particularly if the property in the ORDER BY clause has a high cardinality.
- All considerations for creating composite indexes for ORDER BY queries with multiple properties as well as queries with filters on multiple properties still apply.

Modifying the indexing policy

A container's indexing policy can be updated at any time by using the Azure portal or one of the supported SDKs. An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in place (so no additional storage space is consumed during the operation). The old policy's index is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container.
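Returning briefly to the filter + ORDER BY rewrite shown above — prepending the equality-filtered properties to the ORDER BY list — the transformation can be sketched as a tiny helper. This is a hypothetical illustration simplified to lists of property names, not a real query-rewriting API:

```python
def rewrite_order_by(equality_filter_props, order_by_props):
    """Prepend the equality-filtered properties to the ORDER BY list so a
    single composite index can serve both the filter and the sort, as in
    the rewritten queries above. Simplified sketch only."""
    prefix = [p for p in equality_filter_props if p not in order_by_props]
    return prefix + list(order_by_props)


print(rewrite_order_by(["c.name"], ["c.timestamp"]))
# ['c.name', 'c.timestamp']
print(rewrite_order_by(["c.name", "c.age"], ["c.timestamp"]))
# ['c.name', 'c.age', 'c.timestamp']
```

The resulting list is the property sequence you would define in the composite index for each rewritten query.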
Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size. Important Index transformation is an operation that consumes Request Units. Request Units consumed by an index transformation aren't currently billed if you are using serverless containers. These Request Units will get billed once serverless becomes generally available. Note It is possible to track the progress of index transformation by using one of the SDKs. There is no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries. There is no impact to read availability when adding a new index. Queries will only utilize new indexes once the index transformation is complete. During the index transformation, the query engine will continue to use existing indexes, so you'll observe similar read performance during the indexing transformation to what you had observed before initiating the indexing change. When adding new indexes, there is also no risk of incomplete or inconsistent query results. When removing indexes and immediately running queries that filter on the dropped indexes, there is not a guarantee of consistent or complete query results. If you remove multiple indexes and do so in one single indexing policy change, the query engine guarantees consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine does not guarantee consistent or complete results until all index transformations complete. Most developers do not drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely. 
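Since the progress of an index transformation can be tracked through the SDKs, a common pattern is to poll until it reaches 100%. The sketch below is a generic polling helper of my own; `get_progress` is a placeholder for whatever SDK-specific call you use to read the transformation progress (an assumption for illustration, not a real Azure Cosmos DB API):

```python
import time

def wait_for_index_transformation(get_progress, poll_seconds=5.0, timeout=600.0):
    """Poll a caller-supplied progress function (returning 0-100) until
    the index transformation completes or the timeout elapses. The
    progress source is injected so this sketch stays SDK-agnostic."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if int(get_progress()) >= 100:
            return True
        time.sleep(poll_seconds)
    return False
```

In practice you would wrap your SDK's progress-reading mechanism in `get_progress` and pick a polling interval appropriate for your container size.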
Note

Where possible, you should always try to group multiple indexing changes into one single indexing policy modification.

Indexing policies and TTL

Using the Time-to-Live (TTL) feature requires indexing. This means that:

- it is not possible to activate TTL on a container where the indexing mode is set to None,
- it is not possible to set the indexing mode to None on a container where TTL is activated.

For scenarios where no property path needs to be indexed, but TTL is required, you can use an indexing policy with:

- an indexing mode set to consistent, and
- no included path, and
- /* as the only excluded path.

Next steps

Read more about indexing in the following articles:
https://docs.microsoft.com/en-gb/azure/cosmos-db/index-policy
- Indexes > - Index Builds on Populated Collections > - Rolling Index Builds on Replica Sets

Rolling Index Builds on Replica Sets¶

Warning

If you cannot stop all writes to the collection, do not use the following procedure to create unique indexes.

Procedure¶

Important

The following procedure to build indexes in a rolling fashion applies to replica set deployments, and not sharded clusters. For the procedure for sharded clusters, see Rolling Index Builds on Sharded Clusters instead.

A. Stop One Secondary and Restart as a Standalone¶

Stop the mongod process associated with a secondary. Restart it after making the following configuration updates.

If using a configuration file:

- Comment out the replication.replSetName option.
- Change net.port to a different port.
- Set the parameter disableLogicalSessionCacheRefresh to true in the setParameter section.

For example, the updated configuration file for a replica set member will include content like the following example:

Other settings (e.g. storage.dbPath, etc.) remain the same. And restart:

If using command-line options, make the following configuration updates:

- Remove --replSet.
- Modify --port to a different port. [1]
- Set the parameter disableLogicalSessionCacheRefresh to true in the --setParameter option.

For example, if your replica set member normally runs on the default port of 27017 and with the --replSet option, you would specify a different port, omit the --replSet option, and set the disableLogicalSessionCacheRefresh parameter to true:

Other settings (e.g. --dbpath, etc.) remain the same.

B. Build the Index¶

Connect directly to the mongod instance running as a standalone on the new port and create the new index for this instance. For example, connect a mongo shell to the instance, and use createIndex() to create an ascending index on the username field of the records collection:

C. Restart the Program mongod as a Replica Set Member¶

When the index build completes, shut down the mongod instance. Undo the configuration changes made when starting as a standalone to return it to its original configuration, and restart it as a member of the replica set.
Important

Be sure to remove the disableLogicalSessionCacheRefresh parameter.

For example, to restart your replica set member:

- Configuration File
- Command-line Options

If you are using a configuration file:

- Revert to the original port number.
- Uncomment the replication.replSetName option.
- Remove the disableLogicalSessionCacheRefresh parameter from the setParameter section.

For example:

Other settings (e.g. storage.dbPath, etc.) remain the same. And restart:

Allow replication to catch up on this member.

D. Repeat the Procedure for the Remaining Secondaries¶

Once the member catches up with the other members of the set, repeat the procedure one member at a time for the remaining secondary members:

E. Build the Index on the Primary¶

- A. Stop One Secondary and Restart as a Standalone
- B. Build the Index
- C. Restart the Program mongod as a Replica Set Member
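As a hedged sketch of the configuration file this tutorial alludes to in step A (the dbPath, port number, and replica set name below are placeholder values, not ones from the tutorial), the standalone restart might look like:

```yaml
# Step A - restart the member as a standalone (example values assumed):
storage:
  dbPath: /var/lib/mongodb          # unchanged from the member's config
net:
  port: 27217                       # a different port than the replica set uses
#replication:                        # replSetName commented out while standalone
#  replSetName: rs0
setParameter:
  disableLogicalSessionCacheRefresh: true
```

For step C, revert net.port to its original value, uncomment replication.replSetName, and delete the disableLogicalSessionCacheRefresh parameter before restarting.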
https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/
Tasks in Dispatch API

Tasks refer to actions that are to be completed by the Dispatch API. In ridehail, a task can refer to a trip from one location to another, while for delivery services it can refer to a package that needs to be taken from Point A to Point B.

There are 4 steps a task needs to go through to be completed. These steps are:

- Drive to the pick-up location
- Pick up the resource
- Drive to the drop-off location
- Drop off the resource

The current state of the task can be fetched from the GetTask endpoint of the Task API. A task can have one of 3 states:

- In progress means that the task has steps that still need to be completed. If the assignment field is not set, it means that a vehicle has not yet been assigned to the task. We can refer to the states of the steps to see how far along the task is in being completed.
- Completed means that the task has been completed. The time of completion is also returned in the response.
- Cancelled means that the task has been cancelled. The source of the cancellation and its description are returned in the response.

Along with the state of the task, the GetTask endpoint also returns the states of all the steps. The state of a step is defined by the boolean field completed, which tells us whether the step has been completed or not. Along with the state, we also return the timestamp of when the step was completed. If it has not been completed yet, we return the expected time of completion, which can be used to show ETAs to users.

You can use this cURL command to fetch the current state of the task and steps:

curl --request POST \
  --header "Content-Type: application/json" \
  --header "X-Api-Key: $RIDEOS_API_KEY" \
  --data '{"taskId": "task-1"}'
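To make the step/state bookkeeping above concrete, here is a small sketch that derives a progress summary from the four steps. The field names (completed, description) follow the prose description, but the exact response schema is an assumption for illustration, not the Dispatch API's actual format:

```python
def task_progress(steps):
    """Summarize how far along a task is, given its ordered steps.
    Each step is a dict with a boolean "completed" field, as described
    above. Hypothetical helper, not part of the Dispatch API."""
    done = sum(1 for step in steps if step["completed"])
    return {
        "completedSteps": done,
        "totalSteps": len(steps),
        "inProgress": done < len(steps),
    }


steps = [
    {"description": "Drive to the pick-up location", "completed": True},
    {"description": "Pick up the resource", "completed": True},
    {"description": "Drive to the drop-off location", "completed": False},
    {"description": "Drop off the resource", "completed": False},
]

print(task_progress(steps))
# {'completedSteps': 2, 'totalSteps': 4, 'inProgress': True}
```

A client could run something like this over the steps returned by GetTask to decide whether to show a pickup ETA or a drop-off ETA to the user.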
https://docs.rideos.ai/dispatch-tasks/
Node View T-HFND-010-002 In the Node view, you can connect effects and compositing nodes to form a network, also known as a node system. This view is very useful for rigging puppets, creating advanced effects and having a clear view of complex scenes. The organization and order of the nodes determines the flow of data during the compositing process and how your animation elements will be composited. Do one of the following: - From the top menu, select Windows > Node View. - From any of the other views, click the Add View button and select Node.
https://docs.toonboom.com/help/harmony-14/premium/reference/view/node-view.html
CreateInstanceConfigurationBase¶

- class oci.core.models.CreateInstanceConfigurationBase(**kwargs)¶

Bases: object

Creation details for an instance configuration.

SOURCE_INSTANCE = 'INSTANCE'¶

A constant which can be used with the source property of a CreateInstanceConfigurationBase. This constant has a value of "INSTANCE".

SOURCE_NONE = 'NONE'¶

A constant which can be used with the source property of a CreateInstanceConfigurationBase. This constant has a value of "NONE".

__init__(**kwargs)¶

Initializes a new CreateInstanceConfigurationBase.
https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/api/core/models/oci.core.models.CreateInstanceConfigurationBase.html
Start Here¶

Is deltascope right for you?¶

deltascope may be able to help you if:

- your data consists of a set of 3D image stacks
- your data contains a clear structure and shape that has consistent gross morphology between control and experimental samples
- you want to identify extreme or subtle differences in the structure between your experimental groups
- you have up to 4 different channels to compare

deltascope cannot help if:

- your data was collected using cryosections that need to be aligned after imaging
- your experiment changes the gross anatomy of the structure between control and experimental samples

Installation¶

deltascope can be installed using pip, Python's package installer. Some deltascope dependencies are not available through pip, so we recommend that you install Anaconda, which automatically includes and installs the remaining dependencies. See Setting up a Python environment for more details.

$ pip install deltascope

Note

If you are unfamiliar with the command line, check out this tutorial. Additional resources regarding the command line are available here.

Warning

Packages required for deltascope depend on Visual C++ Build Tools, which can be downloaded at build tools.

Setting up a Python environment¶

If you're new to scientific computing with Python, we recommend that you install Anaconda to manage your Python installation. Anaconda is a framework for scientific computing with Python that will install important packages (numpy, scipy, and matplotlib).

Warning

deltascope is written in Python 3 and requires the installation of the Python 3 version of Anaconda.
https://deltascope.readthedocs.io/en/latest/start-here.html