content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
“pool_gameobject” table¶
The
pool_gameobject table holds game objects that are tied to a
specific pool.
This table can only contain game objects that have a type of GAMEOBJECT_TYPE_CHEST, GAMEOBJECT_TYPE_GOOBER, GAMEOBJECT_TYPE_FISHINGHOLE.
guid¶
This references the “gameobject” table tables unique ID for which the entry is valid.
pool_entry¶
The identifier of a pool template to which this game object belongs. The value has to match with a pool identifier defined in the “pool_template” table.
chance¶
The explicit percentage chance that this game object will be spawned.
If the pool spawns just one game object (max_limit = 1 in the respective pool_template), the core selects the game object to be spawned in a two-step process:
First, only the explicitly-chanced (chance > 0) game objects of the pool are rolled. If this roll does not produce any game object, all the game objects without explicit chance (chance = 0) are rolled with equal chance.
If the pool spawns more than one game object, the chance is ignored and all the game objects in the pool are rolled in one step with equal chance.
In case the pool spawns just one game object and all the game objects have a non-zero chance, the sum of the chances for all the game objects must equal to 100, otherwise the pool won’t be spawned. | http://docs.getmangos.com/en/latest/database/world/pool-gameobject.html | 2017-03-23T06:21:16 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.getmangos.com |
with the help of Git,after a fresh checkout. This will generate some critically needed files.. | http://pyscaffold.readthedocs.io/en/v2.5.7/contrib.html | 2017-03-23T06:06:37 | CC-MAIN-2017-13 | 1490218186780.20 | [] | pyscaffold.readthedocs.io |
General Information
We collect the e-mail addresses and names of those who sign in to forums with their accounts on social networks (Facebook and Google+). Storage
Customer Case uses third party vendors and hosting partners to provide the necessary hardware, software, networking, storage, and related technology required to run Customer Case. Although StiltSoft companyy.
- Heroku - this Cloud service is used for hosting and running Customer Case application.. | https://docs.stiltsoft.com/display/public/CustomerCase/Privacy+Policy | 2017-03-23T06:12:26 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.stiltsoft.com |
block of pixel colors.
This function takes a Color32 texture formats.
For other formats
SetPixels32 is ignored.
The texture also has to have Is Readable flag set in the import settings.
Using
SetPixels32 is faster than calling SetPixels.
See Also: SetPixels, GetPixels32, GetPixels, Apply, LoadRawTextureData, mipmapCount.
// This script will tint texture's mip levels in different colors // (1st level red, 2nd green, 3rd blue). You can use it to see // which mip levels are actually used and how.
function Start () { var rend = GetComponent.<Renderer>();
// duplicate the original texture and assign to the material var texture : Texture2D = Instantiate(rend.material.mainTexture); rend.material.mainTexture = texture;
// colors used to tint the first 3 mip levels var colors = new Color32[3]; colors[0] = Color.red; colors[1] = Color.green; colors[2] = Color.blue; var mipCount = Mathf.Min( 3, texture.mipmapCount );
// tint each mip level for( var mip = 0; mip < mipCount; ++mip ) { var cols = texture.GetPixels32( mip ); for( var i = 0; i < cols.Length; ++i ) { cols[i] = Color32.Lerp( cols[i], colors[mip], 0.33 ); } texture.SetPixels32( cols, mip ); }
// actually apply all SetPixels32, don't recalculate mip levels texture.Apply( false ); }
Set a block of pixel colors.
This function is an extended version of
SetPixels32 above; it does not modify the whole
mip level but modifies only
blockWidth by
blockHeight region starting at x,y.
The
colors array must be blockWidth*blockHeight size, and the modified block
must fit into the used mip level. | https://docs.unity3d.com/ScriptReference/Texture2D.SetPixels32.html | 2017-03-23T06:17:49 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.unity3d.com |
Using the AWS SDKs, CLI, and Explorers
Topics
You can use the AWS SDKs when developing applications with Amazon S3. The AWS SDKs simplify your programming tasks by wrapping the underlying REST API. Mobile SDKs are also available for building connected mobile applications using AWS. This section provides an overview of using AWS SDKs for developing Amazon S3 applications. This section also describes how you can test the AWS SDK code samples provided in this guide. (CLI) to manage Amazon S3 buckets and objects.
AWS Toolkit for Eclipse
The AWS Toolkit for Eclipse includes both the AWS SDK for Java and AWS Explorer for Eclipse. The AWS Explorer for Eclipse is an open source plug-in for Eclipse for Java IDE that makes it easier for developers to develop, debug, and deploy Java applications using AWS. The easy to use GUI interface enables you to access and administer your AWS infrastructure including Amazon S3. You can perform common operations such as manage your buckets and objects, set set up instructions, go to Setting Up the AWS Toolkit for Visual Studio. For examples of using Amazon S3 using the explorer, go to Using Amazon S3 from AWS Explorer.
AWS SDKs
You can download only the SDKs. For information about downloading the SDK libraries, go to Sample Code Libraries.
AWS CLI
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services, including Amazon S3. For information about downloading the AWS CLI, go to AWS Command Line Interface.
Specifying Signature Version in Request Authentication
In the Asia Pacific (Mumbai), Asia Pacific (Seoul), EU (Frankfurt) and China (Beijing) regions, Amazon S3 supports only Signature Version 4. In all other regions, Amazon S3 supports both Signature Version 4 and Signature Version 2.
For all AWS regions, AWS SDKs use Signature Version 4 by default to authenticate requests. When using AWS SDKs that were released before May 2016, you may be required to request Signature Version 4 as shown in the following table: | http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html | 2017-03-23T06:18:12 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.aws.amazon.com |
Changelog¶
0.7.1 (2014-04-28)¶
Add public directive to mark the asset as public:
//= public
It can be used as an alternative to Environment‘s public_assets param.
0.7 (2014-04-23)¶
django-gears package has documentation now (thanks to Preston Timmons).
require directive supports globbing now. If several assets are found, all are required in alphabetical order. If nothing found matching pattern, FileNotFound exception is raised.
Thus, require_directory app and require_tree app can be replaced with require app/* and require app/** respectively.
depend_on directive also supports globbing.
The information about registered search paths is available through the paths property of the Environment class. Search paths can be useful for compilers to resolve internal dependencies.
Add params directive to set asset parameters. Asset parameters can be used to change behavior of plugins for the current asset. For example, this can be used to disable top-level function wrapper in gears-coffeescript compiler:
//= params coffeescript=bare
Allow Gears plugins to inject themselves to the environment. See register_entry_points() docs.
Manifest file can be disabled by setting manifest_path parameter in Environment to False (thanks to Will Bond).
Fix Python 3 compatibility (thanks to Yaoda Liu).
0.6.1 (2013-09-08)¶
- Add ability to disable asset fingerprinting. This can be useful, if you want to compile assets for humans.
0.6 (2013-04-28)¶
- Add processor to add missing semicolons to the end of JavaScript sources.
- Add gzip support.
- Add support for cache busting. This is done through fingerprinting public assets.
- Fix unknown extensions handling. Thanks @xobb1t for the report.
- Fix cssmin and slimit compressors.
0.5 (2012-10-16)¶
Support for Python 3.2 was added (Thanks to Artem Gluvchynsky).
Note
SlimIt and cssmin compressors don’t support Python 3 yet. But you can use compressors from gears-uglifyjs and gears-clean-css packages instead.
0.4 (2012-09-23)¶.
0.3 (2012-06-24)¶
- Added depend_on directive. It is useful when you need to specify files that affect an asset, but not to include them into bundled asset or to include them using compilers. E.g., if you use @import functionality in some CSS pre-processors (Less or Stylus).
- Main extensions (.js or .css) can be omitted now in asset file names. E.g., you can rename application.js.coffee asset to application.coffee.
- Asset requirements are restricted by MIME type now, not by extension. E.g., you can require Handlebars templates or JavaScript assets from CoffeeScript now.
- Added file-based cache.
- Environment cache is pluggable now.
- Fixed cache usage in assets.
0.2 (2012-02-18)¶
- Fix require_directory directive, so it handles removed/renamed/added assets correctly. Now it adds required directory to asset’s dependencies set.
- Added asset dependencies. They are not included to asset’s bundled source, but if dependency is expired, then asset is expired. Any file of directory can be a dependency.
- Cache is now asset agnostic, so other parts of Gears are able to use it.
- Added support for SlimIt as JavaScript compressor.
- Added support for cssmin as CSS compressor.
- Refactored compressors, compilers and processors. They are all subclasses of BaseAssetHandler now.
- Added config for Travis CI.
- Added some docs.
- Added more tests. | http://gears.readthedocs.io/en/latest/changelog.html | 2017-03-23T06:06:41 | CC-MAIN-2017-13 | 1490218186780.20 | [] | gears.readthedocs.io |
IT is performing an update on storage.rice.edu that will briefly interrupt access to personal and departmental storage spaces late Friday night, December 7. The 11:30PM maintenance window lasts one hour, but each storage space will be affected for only a short time.
While the rolling reboot is occurring, you may lose access to your storage folders for a few minutes. Related services like web sites and applications will also be affected for brief intervals during this maintenance.
Updates will be posted to the IT web site: | https://docs.rice.edu/confluence/display/PUBLIC/2012/12/05/storage+update+Friday+night | 2017-03-23T06:13:43 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.rice.edu |
You can change the background color of your talks. To do that, clickin a talk and select one of color options.
The coloring that was used in Talk versions earlier than 3.3.0 is preserved as the default option ().
You can find your own ways to use the coloring capability for the benefit of your team. Here are some ideas – use coloring to:
- prioritize discussions
- assign colors to teams to distinguish discussions associated with each team (in case several teams work on the same document)
- bring some fun into your work | https://docs.stiltsoft.com/display/Talk/Coloring+talks | 2017-03-23T06:16:26 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.stiltsoft.com |
Difference between revisions of "JButton::fetchButton"::fetchButton
Description
Get the button.
Description:JButton::fetchButton [Edit Descripton]
public function fetchButton ()
- Returns
- Defined on line 115 of libraries/joomla/html/toolbar/button.php
See also
JButton::fetchButton source code on BitBucket
Class JButton
Subpackage Html
- Other versions of JButton::fetchButton
SeeAlso:JButton::fetchButton [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JButton::fetchButton&diff=92174&oldid=89508 | 2015-05-22T12:09:23 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.joomla.org |
Information for "Module Class Suffix/fr" Basic information Display titleSuffixe de Classe de Module Default sort keyModule Class Suffix/fr Page length (in bytes)1,019 Page ID32516 creatorSandra97 (Talk | contribs) Date of page creation14:23, 5 March 2014 Latest editorSandra97 (Talk | contribs) Date of latest edit14:45, 5 March 2014 Total number of edits10 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (3)Templates used on this page: Template:JVer (view source) (semi-protected)Template:Version (view source) Chunk:Module Class Suffix/fr (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Module_Class_Suffix/fr&action=info | 2015-05-22T12:52:03 | CC-MAIN-2015-22 | 1432207924991.22 | [] | docs.joomla.org |
Returns a GeoPoint object whose coordinates are not initialized.
Namespace: DevExpress.Xpf.Map
Assembly:
DevExpress.Xpf.Map.v21.2.dll
public static GeoPoint Empty { get; }
Public Shared ReadOnly Property Empty As GeoPoint
An empty GeoPoint.
No
We appreciate your feedback and continued support.
May we contact you if we need to discuss your feedback in greater detail or update you on changes to this help topic? | https://docs.devexpress.com/WPF/DevExpress.Xpf.Map.GeoPoint.Empty | 2021-11-27T06:32:23 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.devexpress.com |
The Specification Explorer is where new specifications are created from projects, and existing ones viewed and modified.
The Specification list can be filtered by using the filter at the top of the list.
Please see the topic How To Use Filters for more advanced filtering information.
To begin a new specification follow the steps below:
When only a single project is available to specify, that does not have a project image associated (see How To: Display A Project Image On The New Specification Dialog).
The Project list can be filtered by entering the criteria in the filter box.
Please see the topic How To Use Filters for more advanced filtering information.
Once a new specification is started, navigate to the end of the user forms using the navigation buttons (bottom of form). The appearance and position of the buttons does depend if the default navigation and specification flow are in use.
The following navigation buttons will appear when the default Specification Flow is in use:
By default each specification will display a section named Tasks. This is where the Form Messages appear. Only when all tasks have been completed will you be able to navigate to the next form, finish or release the specification.
A specification can have all rules and results analyzed when running a specification.
Start a new specification and click the Test Mode button.
See Info: Keyboard Shortcuts for more shortcut tips.
Kiosk Mode gives DriveWorks User the ability to run a Specification full screen without any DriveWorks UI (header bar, command bar, task list and notification area).
To enable Kiosk Mode please see Kiosk Mode (for User).
Once enabled Kiosk Mode is activated by:
To return to normal running mode press the Esc key, on the keyboard.
Once you have finished, released or saved a specification it will appear in the Specification Explorer.
To view a report or document for a specification:
Documents created for the specification will be displayed in the Documents list on the right of the Specification Explorer.
If a document is not listed it may be set to be hidden.
Check the setting Show hidden files when viewing specifications to display all documents.
The Documents list can be filtered by using the filter at the top of the list.
Please see the topic How To Use Filters for more advanced filtering information.
Right Click Context Menu
Right click on any document in the list to Show In Explorer
This will launch Windows Explorer at the location of the selected document.
Reports created each time the specification is processed are displayed in the Reports section, under the Documents list.
Place the report and DriveWorks side by side to see the results in the report alongside the rule that was applied in DriveWorks.
See How To: Diagnose Project Issues Using DriveWorks Specification Report for more information.
Model Reports created during generation of any models for that specification are displayed under the Reports list.
Place the report and DriveWorks side by side to see the results in the report alongside the rule that was applied in DriveWorks.
See How To: Diagnose Project Issues Using DriveWorks Generation Report for more information.
When a specification is in a paused state (see Specification Flow) it could have operations and transitions applied that can perform a further action on the specification.
Operation and Transition buttons appear on the Command Bar of the Specification Explorer.
The default Specification Flow displays the following buttons:
Specifications, in the Specification Explorer, can be multi-selected by selecting the first specification and then:
or by
Applicable transitions and operations can be invoked on multiple specifications.
When invoking an action on multiple specifications the following dialog will be displayed:
The available options are: | https://docs.driveworkspro.com/Topic/AutopilotSpecificationExplorer | 2021-11-27T06:13:34 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.driveworkspro.com |
Exactly-once, At-least-once or At-most-once Execution
Hazelcast, as an AP product, does not provide the exactly-once guarantee. In general, Hazelcast tends to be an at-least-once solution.
In the following failure case, exactly-once guarantee can be broken: When the target member of a pending invocation leaves the cluster while the invocation is waiting for a response, that invocation is re-submitted to its new target due to the new partition table. It can be that, it has already been executed on the leaving member and backup updates are propagated to the backup replicas, but the response is not received by the caller. If that happens, the operation will be executed twice.
In the following failure case, invocation state becomes indeterminate:). | https://docs.hazelcast.com/hazelcast/latest/consistency-and-replication/exactly-once-execution | 2021-11-27T05:45:20 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.hazelcast.com |
Azure Resource Manager deployment modes
When deploying your resources, you specify that the deployment is either an incremental update or a complete update. The difference between these two modes is how Resource Manager handles existing resources in the resource group that aren't in the template..
The default mode is incremental.
Complete mode
In complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified in the template.
Note
Always use the what-if operation before deploying a template in complete mode. What-if shows you which resources will be created, deleted, or modified. Use what-if to avoid unintentionally deleting resources.. Resources in the template are added to the resource group.
Note
When redeploying an existing resource in incremental mode, all properties are reapplied. The properties aren't incrementally added. A common misunderstanding is to think properties that aren't specified in the template are left unchanged. If you don't specify certain properties, Resource Manager interprets the deployment as overwriting those values. Properties that aren't included in the template are reset to the default values. Specify all non-default values for the resource, not just the ones you're updating. The resource definition in the template always contains the final state of the resource. It can't represent a partial update to an existing resource.
In rare cases, properties that you specify for a resource are actually implemented as a child resource. For example, when you provide site configuration values for a web app, those values are implemented in the child resource type
Microsoft.Web/sites/config. If you redeploy the web app and specify an empty object for the site configuration values, the child resource isn't updated. However, if you provide new site configuration values, the child resource type is updated. deployment group create \ --mode Complete \ --name ExampleDeployment \ --resource-group ExampleResourceGroup \ --template-file storage.json
The following example shows a linked template set to incremental deployment mode:
"resources": [ { "type": "Microsoft.Resources/deployments", "apiVersion": "2020-10-01", "name": "linkedTemplate", "properties": { "mode": "Incremental", <nested-template-or-external-template> } } ]
Next steps
- To learn about creating Resource Manager templates, see Understand the structure and syntax of ARM templates.
- To learn about deploying resources, see Deploy resources with ARM templates and Azure PowerShell.
- To view the operations for a resource provider, see Azure REST API. | https://docs.microsoft.com/bs-cyrl-ba/azure/azure-resource-manager/templates/deployment-modes | 2021-11-27T06:25:40 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
Customising the Auspice Client¶
Auspice allows you to customise the appearance and functionality of Auspice when the client is built. This is how Auspice running locally and nextstrain.org look different, despite both using “Auspice”.
Notice the difference? Default Auspice (left) and nextstrain.org’s customised version (right)
This is achieved by providing a JSON at build time to Auspice which defines the desired customisations via:
auspice build --extend <JSON>
Here’s the file used by nextstrain.org to change the appearance of Auspice in the above image.
See the client customisation API for the available options.
AGPL Source Code Requirement¶
Auspice is distributed under AGPL 3.0. Any modifications made to the auspice source code, including build-time customisations as described here, must be made publicly available. We ask that the “Powered by Nextstrain” text and link, rendered below the data visualisations, be maintained in all customised versions of auspice, in keeping with the spirit of scientific citations.
Pages available | https://docs.nextstrain.org/projects/auspice/en/v2.22.0/customise-client/index.html | 2021-11-27T05:48:02 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../_images/auspice-vs-nextstrain.png',
'../_images/auspice-vs-nextstrain.png'], dtype=object)] | docs.nextstrain.org |
The REM Plugin SDK is a framework for developing custom plugins for REM. It provides a template for creating new plugins, templates for generating new management panels and monitoring extensions, and also some documented examples with provided code.
The REM Plugin SDK does not include the GWT SDK. You must download GWT 2.7.0 (direct link), extract it, then run script
copy-gwt-libraries.sh from the REM Plugin SDK package.
Topics
Other documentation for the Rhino Element Manager, including the changelog, links to downloads, and the API documentation can be found on the REM product page. | https://docs.rhino.metaswitch.com/ocdoc/books/rem/3.0.0/rem-sdk-guide/index.html | 2021-11-27T05:13:31 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rhino.metaswitch.com |
User activities
User activities combine web categories, file types, and URL groups in one container. You can include user activities in policies to control access to websites or files that match any of the criteria specified.
- To edit a user activity, click Edit
.
- To clone a user activity, click Edit
.
Combining categories and URL groups in a user activity
The following user activity combines the spyware and malware web category with a group of URLs that are known to be unsafe.
The user activity is added to the default policy. If the traffic matches either the spyware and malware category or any of the URLs specified, the user activity applies and the traffic is blocked.
| https://docs.sophos.com/nsg/sophos-firewall/18.5/Help/en-us/webhelp/onlinehelp/AdministratorHelp/Web/UserActivities/index.html | 2021-11-27T04:53:24 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../../../images/WebUserActivity.png',
'Combining categories and URL groups in a user activity'],
dtype=object)
array(['../../../images/WebPolicyExampleUserActivity.png',
'Adding user activity to the default policy'], dtype=object)] | docs.sophos.com |
Configure KPI monitoring calculations in ITSI
KPI monitoring calculations determine how and when ITSI performs statistical calculations on the KPI. They also determine how ITSI displays gaps in your data. For an overview of the entire KPI creation workflow, see Overview of creating KPIs in ITSI.
Configure the following KPI monitoring calculations:
Next steps
After you define your source search, move on to step 4: Define KPI unit and monitoring lag in ITSI.
Adjust the stateful KPIs caching period
Each time the saved search runs for a KPI with Fill Data Gaps with set to
Last available value, ITSI caches the alert value for the KPI in the itsi_kpi_summary_cache KV store collection. A lookup called itsi_kpi_alert_value_cache in the KPI saved search fills entity-level and service-aggregate gaps for the KPI using the cached alert value.
ITSI fills data gaps with the last reported value for at most 30 to 45 minutes, in accordance with the default modular input interval and retention time (15 minutes + 30 minutes). If data gaps for a KPI continue to occur for more than 45 minutes, the data gaps appear as N/A values.
To prevent bloating of the collection with entity and service-aggregate KPI results, a retention policy runs on the itsi_kpi_summary_cache collection using a Splunk modular input. The modular input runs every 15 minutes and removes the entries that have not been updated for more than 30 minutes.
You can change the stateful KPI caching frequency or retention time.
Prerequisites
- Only users with file system access, such as system administrators, can change the stateful KPI caching frequency and retention time.
- Review the steps in How to edit a configuration file in the Admin Manual.
Never change or copy the configuration files in the default directory. The files in the default directory must remain intact and in their original location.
Steps
- Open or create a local
inputs.conffile for the ITSI app at
$SPLUNK_HOME/etc/apps/SA-ITOA/local.
- Under the
[itsi_age_kpi_alert_value_cache://age_kpi_alert_value_cache]stanza, adjust the
intervaland
retentionTimeInSecsettings.! | https://docs.splunk.com/Documentation/ITSI/4.10.1/SI/KPImonitoring | 2021-11-27T05:35:34 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
C API: Pools¶
Frequently allocating and deallocating memory can be quite costly, especially when you are making large allocations or allocating on different memory resources. To mitigate this, Umpire provides allocation strategies that can be used to customize how data is obtained from the system.
In this example, we will look at creating a pool that can fulfill requests for
allocations of any size. To create a new
umpire_allocator using the pooling
algorithm:
The two arguments are the size of the initial block that is allocated, and the
minimum size of any future blocks. We have to provide a new name for the
allocator, as well as the underlying
umpire_allocator we wish to use to
grab memory.
Once you have the allocator, you can allocate and deallocate memory as before, without needing to worry about the underlying algorithm used for the allocations:
This pool can be created with any valid underlying
umpire_allocator. | https://umpire.readthedocs.io/en/v6.0.0/tutorial/c/pools.html | 2021-11-27T05:20:15 | CC-MAIN-2021-49 | 1637964358118.13 | [] | umpire.readthedocs.io |
MCEnderTeleportEvent
Link to mcenderteleportevent
Event for when an Enderman/Shulker teleports or an ender pearl is used.
The event is cancelable.
If the event is canceled, the ender teleport won't happen.
The event does not have a result.
Diese Klasse importieren
Link to diese-klasse-importieren
It might be required for you to import the package if you encounter any issues (like casting an Array), so better be safe than sorry and add the import at the very top of the file.
ZenScriptCopy
import crafttweaker.api.event.living.MCEnderTeleportEvent;
Extending MCLivingEvent
Link to extending-mclivingevent
MCEnderTeleportEvent extends MCLivingEvent. That means all methods available in MCLivingEvent are also available in MCEnderTeleportEvent
Properties
Link to properties | https://docs.blamejared.com/1.16/de/vanilla/api/event/entity/living/MCEnderTeleportEvent | 2021-11-27T06:25:36 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.blamejared.com |
Date: Sun, 9 Nov 2014 03:50:11 +0100 From: Polytropon <[email protected]> To: "T. Michael Sommers" <[email protected]> Cc: FreeBSD Questions <[email protected]> Subject: Re: Where do user files go these days? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Sat, 08 Nov 2014 21:37:31 -0500, T. Michael Sommers wrote: > I've noticed that neither the instructions for partitioning a disk in > the handbook, nor hier(7), mention a /home partition. Is such a > partition still used? If not, where do user files go? It _can_ be used. Traditionally, /home is a symlink to /usr/home, so if you create partitions according to OS functionality, the users' data will be stored on the /usr partition. But you are completely free to create a dedicated /home partition - on the same disk or even on a different disk; if you put every- thing into one big partition, this will also work. The installer will automatically create the symlink as /home@ -> /usr/home for you. Just make sure that /home exists and is either the correct mount point or a symlink to the actual location (for example /home@ -> /export/home, where /export is the mountpoint for a "shared disk"). Basically, you can create _any_ partitions you like and add a mountpoint for them; /home is not an exception, it's just a "special case" as its presence is expected by many user-run programs. You can configure those things as you like. Here is an example (trimmed): % mount /dev/ad4s1a on / (ufs, local) /dev/ad4s1d on /tmp (ufs, local, soft-updates) /dev/ad4s1e on /var (ufs, local, soft-updates) /dev/ad4s1f on /usr (ufs, local, soft-updates) /dev/ad4s1g on /opt (ufs, local, soft-updates) /dev/ad6 on /home (ufs, local, soft-updates) Similarly, /home could have been /dev/ad4s1f, or even part of /dev/ads1e (which is /usr). -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=29932+0+/usr/local/www/mailindex/archive/2014/freebsd-questions/20141116.freebsd-questions | 2021-11-27T06:39:06 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.freebsd.org |
Date: Wed, 4 Oct 2006 08:30:23 -0400 From: Bill Moran <[email protected]> To: perikillo <[email protected]> Cc: FreeBSD Mailing List <[email protected]> Subject: Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
In response to perikillo <[email protected]>: [snip] > Right now my first backup again crash > > xl0: watchdog timeout > > Right now i change the cable from on port to another and see what happends. > > Guy, please someone has something to tell me, this is critical for me. > > This is my second NIC. Don't know if this is related or not, but it may be: -- Bill Moran Collaborative Fusion Inc.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=734898+0+archive/2006/freebsd-questions/20061008.freebsd-questions | 2021-11-27T06:40:42 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.freebsd.org |
Adding Batching to a Custom Source
To improve the throughput of your custom sources, you can add batching to reads.
Defining a Source
Here is how you create a source that reads lines of text from a file:
public class Sources { public static BatchSource<String> buildLineSource() { return SourceBuilder .batch("line-source", x -> new BufferedReader(new FileReader("lines.txt"))) .<String>fillBufferFn((in, buf) -> { String line = in.readLine(); if (line != null) { buf.add(line); } else { buf.close(); } }) .destroyFn(buf -> buf.close()) .build(); } }
Now it is ready to be used from the pipeline, just like a built-in source:
Pipeline p = Pipeline.create(); p.readFrom(Sources.buildLineSource()) .writeTo(Sinks.logger());
Adding Batching
While this simple source functions correctly, it’s not optimal because
it reads just one line and returns. There are some overheads to
repeatedly calling
fillBufferFn and you can add more than one item to
the buffer:
SourceBuilder .batch("line-source", x -> new BufferedReader(new FileReader("lines.txt"))) .<String>fillBufferFn((in, buf) -> { for (int i = 0; i < 128; i++) { String line = in.readLine(); if (line == null) { buf.close(); return; } buf.add(line); } }) .destroyFn(buf -> buf.close()) .build();
This code adds 128 lines at a time. In your specific case you should
use the rule of thumb to target about 1 millisecond spent in a single
invocation of
fillBufferFn. | https://docs.hazelcast.com/hazelcast/latest/pipelines/custom-batch-source | 2021-11-27T04:51:25 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.hazelcast.com |
Testing Overview
Testing is important to make sure that changes to your code are working as expected. Learn how to test Hazelcast applications before you go into production.
Hazelcast offers the following testing tools:
Unit tests: For testing classes, components or modules used by your software. Unit tests are in general quite cheap to automate and can be run very quickly by a continuous integration server.
Simulator: For testing potential production problems such as real-life failures, network problems, overloaded CPU, and failing members. | https://docs.hazelcast.com/hazelcast/latest/test/testing-apps | 2021-11-27T06:16:47 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.hazelcast.com |
The default Azure Information Protection policy
Applies to: Azure Information Protection
Relevant for: Azure Information Protection classic client for Windows. For the unified labeling client, see Learn about sensitivity labels from the Microsoft 365 documentation.
Note
To provide a unified and streamlined customer experience, the Azure Information Protection classic client and Label Management in the Azure Portal are deprecated as of March 31, 2021. No further support is provided for the classic client, and maintenance versions will no longer be released.
The classic client will be officially retired, and will stop functioning, on March 31, 2022.
The content in this article is provided to support customers with extended support only. All current Azure Information Protection classic client customers must migrate to the Microsoft Information Protection unified labeling platform and upgrade to the unified labeling client. Learn more in our migration blog.
Use the following information to understand how the default policy for Azure Information Protection is configured.
When an administrator first connects to the Azure Information Protection service by using the Azure portal, the Azure Information Protection default policy for that tenant is created. Occasionally, Microsoft might make changes to this default policy but if you were already using the service before the default policy was revised, your earlier version of the Azure Information Protection default policy is not updated because you might have configured it and deployed into production.
You can reference the following values to return your Azure Information Protection policy to the defaults, or update your Azure Information Protection policy to the latest values.
Important
Starting April 2019, the default labels are not automatically created for new customers. These tenants are automatically provisioned for the unified labeling platform, so there is no need to migrate labels after you have configured them in the Azure portal.
For these tenants, if there aren't any sensitivity labels already created in the Microsoft 365 compliance center, you can create the default labels from the current default policy for Azure Information Protection. To do this, select Generate default labels from the Labels pane, and add the labels to the global policy. If you don't see the option to generate default labels, you might need to first activate unified labeling from the Manage > Unified labeling pane. For detailed instructions, see the Get started with Azure Information Protection in the Azure portal quickstart.
Current default policy
This version of the Azure Information Protection default policy is from July 31, 2017.
This Azure Information Protection default policy is created when the Azure Rights Management service is activated, which is the case for new tenants starting February 2018. For more information, see the blog post announcement Improvements to the protection stack in Azure Information Protection.
This Azure Information Protection default policy is also created if you have manually activated the service before the Azure Information Protection policy was created.
If the service was not activated, the Azure Information Protection default policy does not configure protection for the following sublabels:
Confidential \ All Employees
Confidential \ Recipients Only
Highly Confidential \ All Employees
Highly Confidential \ Recipients Only
When these sublabels are not automatically configured for protection, the Azure Information Protection default policy remains the same as the previous default policy.
When protection is applied to the All Employees sublabels, the protection is configured by using the default templates that are automatically converted to labels in the Azure portal. For more information about these templates, see Configuring and managing templates for Azure Information Protection.
Starting August 30, 2017, this version of the Azure Information Protection default policy includes multi-language versions of the label names and descriptions.
More information about the Recipients Only sublabel
Users see this label in Outlook only. They do not see this label in Word, Excel, PowerPoint, or from File Explorer.
When users select this label, the Outlook Do Not Forward option is automatically applied to the email. The recipients that the users specify cannot forward the email and cannot copy or print the contents, or save any attachments.
Labels
Sublabels
Footnote 1
The protection permissions match those in the default template, Confidential \ All Employees.
Footnote 2
The protection permissions match those in the default template, Highly Confidential \ All Employees.
Footnote 3
This feature is currently in PREVIEW. The Azure Preview Supplemental Terms include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Information Protection bar
Settings
Some of the settings were added after July 31, 2017.
Default policy before July 31, 2017
Note that descriptions in this policy refer to data that requires protection, and also to data tracking and revoking. The policy does not configure this protection for these labels, so you must take additional steps to fulfill this description. For example, configure the label to apply protection or use a data loss prevention (DLP) solution. Before you can track and revoke a document by using the document tracking site, the document must be protected by the Azure Rights Management service and tracked by the person who protected the document.
Labels
Sublabels
Information Protection bar
Settings
Default policy before March 21, 2017
Labels
Sublabels
Information Protection bar
Settings
Next steps
For more information about configuring your Azure Information Protection policy, use the links in the Configuring your organization's policy section. | https://docs.microsoft.com/en-us/azure/information-protection/configure-policy-default | 2021-11-27T06:34:18 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
Server API¶
Auspice client requests¶
The Auspice server handles requests to 3 API endpoints made by the Auspice client:
/charon/getAvailable(returns a list of available datasets and narratives)
/charon/getDataset(returns the requested dataset)
/charon/getNarrative(returns the requested narrative)
/charon/getAvailable¶
URL query arguments:
prefix(optional) - the pathname of the requesting page in Auspice. The
getAvailablehandler can use this to respond according appropriately. Unused by the default Auspice handler.
JSON Response (on success):
{ "datasets": [ { "request": "[required] The pathname of a valid dataset. \ Will become the prefix of the getDataset request.", "buildUrl": "[optional] A URL to display in the sidebar representing \ the build used to generate this analysis.", "secondTreeOptions": "[optional] A list of requests which should \ appear as potential second-trees in the sidebar dropdown" }, ... ], "narratives": [ {"request": "URL of a narrative. Will become the prefix in a getNarrative request"}, ... ] }
Failure to return a valid JSON will result in a warning notification shown in Auspice.
/charon/getDataset¶
URL query arguments:
prefix(required) - the pathname of the requesting page in Auspice. Use this to determine which dataset to return.
type(optional) – if specified, then the request is for an additional file (e.g. “tip-frequencies”), not the main dataset.
JSON Response (on success):
The JSON response depends on the file-type being requested.
If the type is not specified, i.e. we’re requesting the “main” dataset JSON then see this JSON schema. Note that the Auspice client cannot process v1 (meta / tree) JSONs – see below for how to convert these.
Alternative file type reponses are to be documented.
Alternative responses:
A
204 reponse will cause Auspice to show its splash page listing the available datasets & narratives.
Any other non-200 reponse behaves similarly but also displays a large “error” message indicating that the dataset was not valid.
/charon/getNarrative¶
URL query arguments:
prefix(required) - the pathname of the requesting page in Auspice. Use this to determine which narrative to return.
type(optional) - the format of the data returned (see below for more information). Current valid values are “json” and “md”. If no type is specified the server will use “json” as a default (for backwards compatibility reasons). Requests to this API from the Auspice client are made with `type=md.
Response (on success):
The response depends on the
type specified in the query.
If a markdown format is requested, then the narrative file is sent to the client unmodified to be parsed on the client.
If a JSON is requested then the narrative file is parsed into JSON format by the server.
For Auspice versions prior to v2.18 this was the only expected behavior.
The transformation from markdown (i.e. the narrative file itself) to JSON is via the
parseNarrativeFile() function (see below for how this is exported from Auspice for use in other servers).
Here, roughly, is the code we use in the auspice server for this transformation:
const fileContents = fs.readFileSync(pathName, 'utf8'); if (type === "json") { const blocks = parseNarrative(fileContents); res.send(JSON.stringify(blocks).replace(/</g, '\\u003c')); }
While the Auspice client (from v2.18 onwards) always requests the
type=md, it will attempt to parse the response as JSON if markdown parsing fails, in an effort to remain backwards compatable with servers which may be using an earlier API.
Suppling custom handlers to the Auspice server¶
The provided Auspice servers – i.e.
auspice view and
auspice develop both have a
--handlers <JS> option which allows you to define your own handlers.
The provided JavaScript file must export three functions, each of which handles one of the GET requests described above and must respond accordingly (see above for details).
For information about the
req and
res arguments see the express documentation for the request object and response object, respectively.
You can see nextstrain.org’s implementation of these handlers here.
Here’s a pseudocode example of an implementation for the
getAvailable handler which may help understanding:
const getAvailable = (req, res) => { try { /* collect available data */ res.json(data); } catch (err) { const errorMessage = `error message to display in client`; console.log(errorMessage); /* printed by the server, not the client */ return res.status(500).type("text/plain").send(errorMessage); } };
Importing code from Auspice¶
The servers included in Auspice contain lots of useful code which you may want to use to either write your own handlers or entire servers. For instance, the code to convert v1 dataset JSONs to v2 JSONs (which the client requires) can be imported into your code so you don’t have to reinvent the wheel!
Currently
const auspice = require("auspice");
returns an object with two properties:
convertFromV1¶
Signature:
const v2json = convertFromV1({tree, meta})
where
tree is the v1 tree JSON, and
meta the v1 meta JSON.
Returns:
An object representing the v2 JSON defined by this schema.
parseNarrativeFile¶
Signature:
const blocks = parseNarrativeFile(fileContents);
where
fileContents is a string representation of the narrative Markdown file.
Returns:
An array of objects, each entry representing a different narrative “block” or “page”. Each object has properties
__html– the HTML to render in the sidebar to form the narrative
dataset– the dataset associated with this block
query– the query associated with this block | https://docs.nextstrain.org/projects/auspice/en/v2.31.0/server/api.html | 2021-11-27T06:31:37 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.nextstrain.org |
When using
LIKER the default weighting of the terms has two factors.
The first is based on the words location in the query, with the most
important term first. The second is based on word frequency in the
document set, where common words have a lesser importance attached
to them. The logic operators (+ and -) remove the second factor.
Additionally the not operator (-) negates the first factor. | https://docs.thunderstone.com/site/texisman/combinatorial_logic_and_liker.html | 2021-11-27T06:22:42 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.thunderstone.com |
These Vortex command-line options are in effect when running a SQL
statement from the
texis command-line (the
-s option).
-cFormat one field per line in the output. Normally all fields from a row are output on the same line.
-hDo not print column headings in the output.
-l rowsLimit output to the given number of
rows.
-mCreate the database named with
-d.
-vVerbose: display inserted/updated/deleted rows also; by default only rows from
selectstatements are shown.
-w widthMake field headings
widthcharacters wide. A value of 0 means align to the longest field name. | https://docs.thunderstone.com/site/vortexman/vortex_command_line_sql.html | 2021-11-27T05:09:45 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.thunderstone.com |
Fuel Transfers
Many fleets have a small tanker truck that travels the lot and refuels vehicles. Since posting fuel to this vehicle would throw off fuel usage and MPG reports, a routine is provided so you can make a transfer directly from a tank to a tanker unit, posting the fuel to the pump and not to the tanker vehicle. The tanker unit acts as a secondary tank and should be assigned a unique tank and pump number for posting purposes. To transfer fuel to a tanker vehicle, do the following:
- Select Fuel > Pump/Tank Proc > Pump to Tank Transfer from the RTA main menu (FPP).
- Read the message displayed and choose OK to continue.
- Enter a pump number or press F1 to select a pump from the lookup list.
- Enter the tank number that represents the tanker vehicle or press F1 to select a tank from the lookup list.
- Enter the number of gallons to transfer.
- Enter the transfer date or press F1 to select a date from the calendar. | https://docs.rtafleet.com/rta-manual/fuel-inventory/fuel-transfers/ | 2021-11-27T04:55:07 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rtafleet.com |
Bind Host (<host> Parameter)
Host parameter selects the IP-address to which this VSP will "bind". LINUX servers often have several network interfaces (with each interface having its own IP-address) and it may be necessary to specify which interface this particular VSP will work on.
This parameter must not be omitted. Defining Host="" makes the VSP bind to the "general" bind IP-address. | https://docs.tibbo.com/soism/vspl_port_bind_host | 2021-11-27T05:19:10 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.tibbo.com |
How can I get a URL to the data products that I've found?
If you are using the VSO Web Interface, please follow the instructions in HowToDownload, and make sure that you select a data transfer method that begins with 'URL'.
If you are connecting through the VSO API, you should read the documentation on the GetData procedure, and send the appropriate method. You can always send the method URL, and the data provider will respond with the URL sub-types that it supports. You may also send a list of methods, so that you can attempt to avoid a second round trip.
For instance, you might send the following values in the method array:
- URL-TAR_GZ
- URL-ZIP
- URL-FILE
- URL
This tells the you would most prefer a URL to a gziped tarball, but would also accept a URL to a zip file or a URL to individual files. If the data provider doesn't support it, it should report back with what URL subtypes it supports. If it doesn't support any URL types, it will respond back with a list of its supported transfer methods. | https://docs.virtualsolar.org/wiki/HowToGetUrl | 2021-11-27T04:51:05 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.virtualsolar.org |
FilesFiles
In the 8base GraphQL API,
Files is just another table that supports all standard CRUD operations and connections to other tables. When you create a field of type
File, the platform creates a relationship (connection) between your table and the
Files table under the hood. This allows you to use connection-related operations such as
create,
connect,
disconnect on file-type fields.
To handle delivery and transformations on file uploads in the 8base Management Console, we've integrated with Filestack. S3 is then used to safely store the uploaded file. Thus, inside the Data Viewer you're able to easily manage files (pictures and documents) as they are attached to different records.
Managing FilesManaging Files
UploadUpload
Inside the Data Viewer (
Data > Table Name > Data) you're able to manage all records for the selected data table. When creating or editing a record, the
Add <File Type> option will appear next to any pertaining data field. Using this option will launch the Filestack uploader, allowing you the option of uploading different files through a number of connected channels.
In this same view, you are able to remove any file from a given record. Simply use the ellipsis dropdown located on the image and select "Delete". Make sure to save your changes before leaving the screen.
| https://docs.8base.com/docs/8base-console/handling-files/ | 2021-11-27T06:23:26 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['/assets/static/data-viewer-upload.60cafca.fc364550dcb87dd7e147f2928d6ba5a1.png',
'Data Viewer uploader with connected channels'], dtype=object)
array(['/assets/static/data-viewer-file-delete.b6af8ad.c4d99c39da585464b2aead9e9eb17dae.png',
'Delete files from a specific record'], dtype=object) ] | docs.8base.com |
Date: Sat, 27 Nov 2021 04:55:40 +0000 (UTC) Message-ID: <1483850277.2258.1637988940428@[23.23.85.115]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_2257_1180537038.1637988940405" ------=_Part_2257_1180537038.1637988940405 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
July 8, 2020
This release includes the beta version of Arnold GPU
Important information about Arnold GPU (beta)
Arnold, KtoA, and other downloads are available here. Installation instructions come with KtoA, but can also be = viewed here: Installation.=
Linux: 418.56 or higher
Windows: 419.77 or higher
#493 Arnold crashes with varying per-point width attributes on = curve location | https://docs.arnoldrenderer.com/exportword?pageId=119113729 | 2021-11-27T04:55:40 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.arnoldrenderer.com |
Architecture
There are four main components in Zeebe's architecture: the client, the gateway, the broker, and the exporter.
#Client
Clients are libraries that you embed in an application (e.g. a microservice that executes your business logic) to connect to a Zeebe cluster. Clients have two primary uses:
- Carrying out business logic (starting workflow instances, publishing messages, working on tasks)
- Handling operational issues (updating workflow instance variables, resolving incidents)
More about Zeebe clients:
- Clients connect to the Zeebe gateway via gRPC, which uses http/2-based transport. To learn more about gRPC in Zeebe, check out the gRPC section of the docs.
- The Zeebe project includes officially-supported Java and Go clients, and gRPC makes it possible to generate clients in a range of different programming languages. Community clients have been created in other languages, including C#, Ruby, and JavaScript.
- Client applications can be scaled up and down completely separately from Zeebe--the Zeebe brokers do not execute any business logic.
#Gateway
The gateway, which proxies requests to brokers, serves as a single entry point to a Zeebe cluster.
The gateway is stateless and sessionless, and gateways can be added as necessary for load balancing and high availability.
#Broker
The Zeebe broker is the distributed workflow engine that keeps state of active workflow instances.
Brokers can be partitioned for horizontal scalability and replicated for fault tolerance. A Zeebe deployment will often consist of more than one broker.
It's important to note that no application business logic lives in the broker. Its only responsibilities are:
Storing and managing the state of active workflow instances
Distributing work items to clients
Brokers form a peer-to-peer network in which there is no single point of failure. This is possible because all brokers perform the same kind of tasks and the responsibilities of an unavailable broker are transparently reassigned in the network.
#Exporter
The exporter system provides an event stream of state changes within Zeebe. This data has many potential uses, including but not limited to:
Monitoring the current state of running workflow instances
Analysis of historic workflow data for auditing, business intelligence, etc
Tracking incidents created by Zeebe
The exporter includes a simple API that you can use to stream data into a storage system of your choice. Zeebe includes an out-of-the-box Elasticsearch exporter, and other community-contributed exporters are also available. | https://docs.camunda.io/docs/0.25/components/zeebe/basics/architecture/ | 2021-11-27T04:38:37 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['/assets/images/zeebe-architecture-67c608106ddc1c9eaa686a5a268887f9.png',
'zeebe-architecture'], dtype=object) ] | docs.camunda.io |
DirectAccess
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016
You can use this topic for a brief overview of DirectAccess, including the server and client operating systems that support DirectAccess, and for links to additional DirectAccess documentation for Windows Server 2016.
Note
In addition to this topic, the following DirectAccess documentation is available.
- DirectAccess Deployment Paths in Windows Server
- Prerequisites for Deploying DirectAccess
- DirectAccess Unsupported Configurations
- DirectAccess Test Lab Guides
- DirectAccess Known Issues
- DirectAccess Capacity Planning
- DirectAccess Offline Domain Join
- Troubleshooting DirectAccess
- Deploy a Single DirectAccess Server Using the Getting Started Wizard
- Deploy a Single DirectAccess Server with Advanced Settings
- Add DirectAccess to an Existing Remote Access (VPN) Deployment
DirectAccess allows connectivity for remote users to organization network resources without the need for traditional Virtual Private Network (VPN) connections. With DirectAccess connections, remote client computers are always connected to your organization - there is no need for remote users to start and stop connections, as is required with VPN connections. In addition, your IT administrators can manage DirectAccess client computers whenever they are running and Internet connected..
DirectAccess provides support only for domain-joined clients that include operating system support for DirectAccess.
The following server operating systems support DirectAccess.
You can deploy all versions of Windows Server 2016 as a DirectAccess client or a DirectAccess server.
You can deploy all versions of Windows Server 2012 R2 as a DirectAccess client or a DirectAccess server.
You can deploy all versions of Windows Server 2012 as a DirectAccess client or a DirectAccess server.
You can deploy all versions of Windows Server 2008 R2 as a DirectAccess client or a DirectAccess server.
The following client operating systems support DirectAccess.
Windows 10 Enterprise
Windows 10 Enterprise 2015 Long Term Servicing Branch (LTSB)
Windows 8 and 8.1 Enterprise
Windows 7 Ultimate
Windows 7 Enterprise | https://docs.microsoft.com/en-us/windows-server/remote/remote-access/directaccess/directaccess | 2021-11-27T04:45:04 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
Facilities
A facility in the RTA system is treated as a separate company or shop. Data is stored separately for each facility, and, in most cases costs cannot be combined. For example, if your company has multiple shops located in various cities and you want to individually track each location, you'd need to set up facility records for each shop. This would allow each facility to track its own vehicles, parts, tires, etc. separately as if each location had purchased its own copy of the RTA system. Various reports could be generated for each facility providing cost, productivity, inventory, and other vital information.
When using multiple facilities, you'll have the ability to share vehicles, parts, and various data from other facilities. For example, suppose your company has two locations and a facility has been set up for each location. A vehicle that belongs to facility 00001 travels out of the vicinity and needs to have repair work done at facility 00002. Facility 00002 will be able to access the vehicle, history, and other needed information from facility 00001 in order to complete the repairs. All the costs and records would be updated in the appropriate facilities. Vehicle costs and history would be updated in the vehicle's facility (00001); work order, part inventory, and mechanic records would be updated in the facility where the work occurred (00002). To use multiple facilities, set the "Use Multiple Facility" switch in Main System Parameters (SSM, switch 26) to YES.
You can restrict facility access by using the System Security feature (SUM). Once implemented, users can be set up with rights allowing access to one facility, a group of facilities in their region(s), or all facilities.
Adding a Facility
To add a facility record, do the following:
- Select Master > Facility > File Maintenance from the RTA main menu (MFM).
- Enter a facility number and choose Add. The facility number is a numeric field allowing up to five digits.
- Enter the facility information.
- Save the record.
Facility Record Field Descriptions: Main Window
- Facility Number: (Numeric field) Enter up to five digits for the facility number.
- Abbreviation: Enter a facility abbreviation.
- Status: This field displays the facility status: Active or Inactive. No entry can be made to this field. When initially adding a facility, the status defaults to ACTIVE. Facilities cannot be deleted from the system, but they can be changed to an INACTIVE status if no longer needed.
- Cross Reference: This field displays the cross-reference number for this facility. RTA uses this field internally; no entry can be made.
- Home State: This is the state where the facility is located. This information is used as a default state if a state code is not indicated when fuel transactions are entered or uploaded into this facility.
- Report Header: This is the header that prints on work orders, purchase orders, and various reports.
Facility Record Field Descriptions: General Tab
- Facility Defaults: Enter the default facility number where the customer, vendor, tire manufacturer, and tire capper records are located that this facility will be using. In most instances, the records will come from the same facility being added. However, there may be times when a satellite facility has the same customer or vendor base as another facility. Specifying the other facility here eliminates having to add duplicate records in multiple facilities. The facility number entered here is used as a default and can be overridden at any time.
NOTE: Once the facility record has been saved, if you wish to change the vendor default facility, select Utilities > Chg-Vendor from the menu or click on the Change Facility icon next to the Default Vendor field. Follow the on-screen prompts to run a renumber utility program. The system will then update part cross-references, requisitions, and open purchase orders with the new default facility.
- WO Defaults: Default vendor number, reason code, priority code, and shop ID can be specified here. These values will be used as the defaults when new work orders are created but can be changed as needed on a per work order basis.
- Overhead Percentages: Overhead charges can be added to all work orders to help cover the cost of unbilled shop expenses such as shop cleanup, and rags. If you wish to apply overhead by facility rather than by customer, set the "Overhead Fac/Cust flag" switch to FACILITY (SSM, switch 34) and then enter the markup percentages to apply to part, labor, tire, and miscellaneous costs posted on work orders. You are not required to enter markups in all the fields. Refer to "Applying Overhead or Shop Costs" for more information.
- Import/Export: Select the Import/Export checkbox if you are using the Import/Export add-on option. Contact our sales department for more information about this option.
- Tax Rate: If a tax rate is entered here, it can be used to override the vendor's specified tax rate in purchase orders. Normally, the taxable items on a purchase order will use the vendor's specified tax rate, since sales tax will vary by city. When entering items on a purchase order, you may alter the tax rate used to the one specified here, on a per-item basis if desired.
Facility Record Field Descriptions: Address Tab
- Billing Information: Enter the facility name, address, phone number, and main contact person. This information is printed on work orders and purchase orders.
- Shipping Information: Enter the shipping address and information or choose Copy to use the billing information. This is printed on purchase orders.
Facility Record: Period Costs Tab
The Period Costs tab contains costs accrued in this facility. This information is not required when adding a new facility. Period costs are automatically updated as work order costs are posted.
Changing a Facility
To change a facility record, do the following:
- Select Master > Facility > File Maintenance from the RTA main menu (MFM).
- Enter a facility number or press F1 to select a facility from the lookup list.
- Make the changes as needed.
- Save the record.
Renumbering a Facility
In order to renumber a facility, the status must be ACTIVE. To renumber a facility record, do the following:
- Select Master > Facility > File Maintenance from the RTA main menu (MFM).
- Enter a facility number or press F1 to select a facility from the lookup list.
- Select Utilities > Renumber from the menu or click on the Renumber Facility icon in the toolbar.
- Enter the new facility number.
Changing a Facility Status
Facility records cannot be deleted from the system. If a shop or location is closed, the facility status can be changed from ACTIVE to INACTIVE. Doing so causes the facility record and data (work orders, parts, fuel transactions, etc.) to be inaccessible. In the event the data is later needed, the facility status can be changed back to an ACTIVE status. To activate or deactivate a facility record, do the following:
- Select Master > Facility > File Maintenance from the RTA main menu (MFM).
- Enter a facility number or press F1 to select a facility from the lookup list.
- To deactivate, select Utilities > De-Activate from the menu or click on the De-Activate Facility icon in the toolbar. OR
To reactivate, select Utilities > Re-Activate from the menu or click on the Re-Activate Facility icon in the toolbar.
- Choose Yes to confirm the activation. | https://docs.rtafleet.com/rta-manual/getting-started/facilities/ | 2021-11-27T05:56:11 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rtafleet.com |
29. References
Aberson, S.D., 1998: Five-day Tropical cyclone track forecasts in the North
Atlantic Basin. Weather & Forecasting, 13, 1005-1015.
Ahijevych, D., E. Gilleland, B.G. Brown, and E.E. Ebert, 2009. Application of
spatial verification methods to idealized and NWP-gridded precipitation forecasts.
Weather Forecast., 24 (6), 1485 - 1497, doi: 10.1175/2009WAF2222298.1.
Barker, T. W., 1991: The relationship between spread and forecast error in
extended-range forecasts. Journal of Climate, 4, 733-742.
Bradley, A.A., S.S. Schwartz, and T. Hashino, 2008: Sampling Uncertainty
and Confidence Intervals for the Brier Score and Brier Skill Score.
Weather and Forecasting, 23, 992-1006.
Brill, K. F., and F. Mesinger, 2009: Applying a general analytic method
for assessing bias sensitivity to bias-adjusted threat and equitable
threat scores. Weather and Forecasting, 24, 1748-1754.
Buizza, R., 1997: Potential forecast skill of ensemble prediction and spread
and skill distributions of the ECMWF ensemble prediction system. Monthly
Weather Review,125, 99-119.
Bullock, R., T. Fowler, and B. Brown, 2016: Method for Object-Based
Diagnostic Evaluation. NCAR Technical Note NCAR/TN-532+STR, 66 pp.
Candille, G., and O. Talagrand, 2008: Impact of observational error on the
validation of ensemble prediction systems. Quarterly Journal of the Royal
Meteorological Society 134: 959-971.
Casati, B., G. Ross, and D. Stephenson, 2004: A new intensity-scale approach
for the verification of spatial precipitation forecasts. Meteorological
Applications 11, 141-154.
Davis, C.A., B.G. Brown, and R.G. Bullock, 2006a: Object-based verification
of precipitation forecasts, Part I: Methodology and application to
mesoscale rain areas. Monthly Weather Review, 134, 1772-1784.
Davis, C.A., B.G. Brown, and R.G. Bullock, 2006b: Object-based verification
of precipitation forecasts, Part II: Application to convective rain systems.
Monthly Weather Review, 134, 1785-1795.
Dawid, A.P., 1984: Statistical theory: The prequential approach. Journal of
the Royal Statistical Society A147, 278-292.
Ebert, E.E., 2008: Fuzzy verification of high-resolution gridded forecasts:
a review and proposed framework. Meteorological Applications, 15, 51-64.
Eckel, F. A., M.S. Allen, M. C. Sittel, 2012: Estimation of Ambiguity in
Ensemble Forecasts. Weather Forecasting, 27, 50-69.
Efron, B. 2007: Correlation and large-scale significance testing. Journal
of the American Statistical Association,* 102(477), 93-103.
Epstein, E. S., 1969: A scoring system for probability forecasts of ranked categories.
J. Appl. Meteor., 8, 985-987, 10.1175/1520-0450(1969)008<0985:ASSFPF>2.0.CO;2.
Gilleland, E., 2010: Confidence intervals for forecast verification.
NCAR Technical Note NCAR/TN-479+STR, 71pp.
Gilleland, E., 2017. A new characterization in the spatial verification
framework for false alarms, misses, and overall patterns.
Weather Forecast., 32 (1), 187 - 198, doi: 10.1175/WAF-D-16-0134.1.
Gilleland, E., 2020. Bootstrap methods for statistical inference.
Part I: Comparative forecast verification for continuous variables.
Journal of Atmospheric and Oceanic Technology, 37 (11), 2117 - 2134,
doi: 10.1175/JTECH-D-20-0069.1.
Gilleland, E., 2020. Bootstrap methods for statistical inference.
Part II: Extreme-value analysis. Journal of Atmospheric and Oceanic
Technology, 37 (11), 2135 - 2144, doi: 10.1175/JTECH-D-20-0070.1.
Gilleland, E., 2021. Novel measures for summarizing high-resolution forecast
performance. Advances in Statistical Climatology, Meteorology and Oceanography,
7 (1), 13 - 34, doi: 10.5194/ascmo-7-13-2021.
Gneiting, T., A. Westveld, A. Raferty, and T. Goldman, 2004: Calibrated
Probabilistic Forecasting Using Ensemble Model Output Statistics and
Minimum CRPS Estimation. Technical Report no. 449, Department of
Statistics, University of Washington. Available at
Hamill, T. M., 2001: Interpretation of rank histograms for verifying ensemble
forecasts. Monthly Weather Review, 129, 550-560.
Hersbach, H., 2000: Decomposition of the Continuous Ranked Probability Score
for Ensemble Prediction Systems. Weather and Forecasting, 15, 559-570.
Jolliffe, I.T., and D.B. Stephenson, 2012: Forecast verification. A
practitioner’s guide in atmospheric science. Wiley and Sons Ltd, 240 pp.
Knaff, J.A., M. DeMaria, C.R. Sampson, and J.M. Gross, 2003: Statistical,
Five-Day Tropical Cyclone Intensity Forecasts Derived from Climatology
and Persistence. Weather & Forecasting, Vol. 18 Issue 2, p. 80-92.
Mason, S. J., 2004: On Using “Climatology” as a Reference Strategy
in the Brier and Ranked Probability Skill Scores. Monthly Weather Review,
132, 1891-1895.
Mason, S. J., 2008: Understanding forecast verification statistics.
Meteor. Appl., 15, 31-40, doi: 10.1002/met.51.
Mittermaier, M., 2014: A strategy for verifying near-convection-resolving
model forecasts at observing sites. Weather Forecasting, 29, 185-204.
Mood, A. M., F. A. Graybill and D. C. Boes, 1974: Introduction to the
Theory of Statistics, McGraw-Hill, 299-338.
Murphy, A.H., 1969: On the ranked probability score. Journal of Applied
Meteorology and Climatology, 8 (6), 988 - 989,
doi: 10.1175/1520-0450(1969)008<0988:OTPS>2.0.CO;2.
Murphy, A.H., and R.L. Winkler, 1987: A general framework for forecast
verification. Monthly Weather Review, 115, 1330-1338.
Ou, M. H., Charles, M., & Collins, D. C. 2016: Sensitivity of calibrated week-2
probabilistic forecast skill to reforecast sampling of the NCEP global
ensemble forecast system. Weather and Forecasting, 31(4), 1093-1107.
Roberts, N.M., and H.W. Lean, 2008: Scale-selective verification of rainfall
accumulations from high-resolution forecasts of convective events.
Monthly Weather Review, 136, 78-97.
Saetra O., H. Hersbach, J-R Bidlot, D. Richardson, 2004: Effects of
observation errors on the statistics for ensemble spread and
reliability. Monthly Weather Review 132: 1487-1501.
Santos C. and A. Ghelli, 2012: Observational probability method to assess
ensemble precipitation forecasts. Quarterly Journal of the Royal
Meteorological Society 138: 209-221.
Schwartz C. and Sobash R., 2017: Generating Probabilistic Forecasts from
Convection-Allowing Ensembles Using Neighborhood Approaches: A Review
and Recommendations. Monthly Weather Review, 145, 3397-3418.
Stephenson, D.B., 2000: Use of the “Odds Ratio” for diagnosing
forecast skill. Weather and Forecasting, 15, 221-232.
Stephenson, D.B., B. Casati, C.A.T. Ferro, and C.A. Wilson, 2008: The extreme
dependency score: A non-vanishing measure for forecasts of rare events.
Meteorological Applications 15, 41-50.
Tödter, J. and B. Ahrens, 2012: Generalization of the Ignorance Score:
Continuous ranked version and its decomposition. Mon. Wea. Rev.,
140 (6), 2005 - 2017, doi: 10.1175/MWR-D-11-00266.1.
Weniger, M., F. Kapp, and P. Friederichs, 2016: Spatial Verification Using
Wavelet Transforms: A Review. Quarterly Journal of the Royal
Meteorological Society, 143, 120-136.
Wilks, D.S. 2010: Sampling distributions of the Brier score and Brier skill
score under serial dependence. Quarterly Journal of the Royal
Meteorological Society,, 136, 2109-2118. doi:10.1002/qj.709
Wilks, D., 2011: Statistical methods in the atmospheric sciences.
Elsevier, San Diego. | https://met.readthedocs.io/en/develop/Users_Guide/refs.html | 2021-11-27T04:53:55 | CC-MAIN-2021-49 | 1637964358118.13 | [] | met.readthedocs.io |
Using the WOPI protocol to integrate with Office for the web¶
You can use the Web Application Open Platform Interface (WOPI) protocol to integrate Office for the web with your application. The WOPI protocol enables Office for the web to access and change files that are stored in your service.
To integrate your application with Office for the web, you need to do the following:
Be a member of the Office 365 - Cloud Storage Partner Program. Currently integration with the Office for the web cloud service is available to cloud storage partners. You can learn more about the program, as well as how to apply, at.
Important
The Office 365 - Cloud Storage Partner Program is intended for independent software vendors whose business is cloud storage. It is not open to Office 365 customers directly.
Implement the WOPI protocol - a set of REST endpoints that expose information about the documents that you want to view or edit in Office for the web. The set of WOPI operations that must be supported is described in the section titled WOPI implementation requirements for Office for the web integration.
Read some XML from an Office for the web URL that provides information about the capabilities that Office for the web applications expose, and how to invoke them; this process is called WOPI discovery.
Provide an HTML page (or pages) that will host the Office for the web iframe. This is called the host page and is the page your users visit when they open or edit Office documents in Office for the web.
You can also optionally integrate your own UI elements with Office for the web. For example, when users choose Share in Office for the web, you can show your own sharing UI. These interaction points are described in the section titled Using PostMessage to interact with the Office for the web application iframe.
How to read this documentation¶
This documentation contains an immense amount of information about how to integrate with Office for the web, including details about how to implement the WOPI protocol, how Office for the web integration may be useful to you, and what capabilities it provides, you should read the following sections:
Integrating with Office for the web - A high level overview of the scenarios enabled by Office for the web integration, as well as a brief description of some of the key technical elements in a successful integration.
Using the WOPI protocol to integrate with Office for the web - A brief description of the technical pieces that you must implement to integrate with Office for the web. for the web. You should also read the following sections:
WOPI Validation application
Troubleshooting interactions with Office for the web
Office for the web:
-
Using PostMessage to interact with the Office for the web application iframe
WOPI discovery, specifically the WOPI actions section
Finally, if you are looking for more details about the process for shipping your integration, see the Shipping your Office for the web integration section. | https://wopi.readthedocs.io/en/latest/ | 2021-11-27T05:37:33 | CC-MAIN-2021-49 | 1637964358118.13 | [] | wopi.readthedocs.io |
Manage the Blacklist
From Genesys Documentation
This topic is part of the manual Genesys Recording, Quality Management, and Speech Analytics User's Guide for version Current of Genesys Recording, Quality Management, and Speech Analytics.
Contents
Use the Blacklist to manage a list of terms/phrases that should be ignored by the system when analyzing Trending data. The Blacklist applies to all users, views and languages.
Related documentation:
The Blacklist list is a collection of terms/phrases permanently ignored by the system when analyzing Trending data. The list affects all users, views and languages.
Manage the Blacklist
- Click the Blacklist button
. The following window opens.
- Click Add Term to add a term/phrase to the list.
- In the text field that appears type the term/phrase you want to add to the list and click Add.
- To remove a term/phrase from the blacklist, click Delete next to the term/phrase you want to remove from the list.
- Click Confirm to save your changes.
ImportantYou can add a term/phrase to the blacklist from the Trending Filter tooltip and you can configure the Trending page to ignore the blacklist using the Ignore Blacklist option, allowing the terms/phrases in the Blacklist to be included in the Trending chart. When this option is enabled, the phrases appear in the Trending view with a dark border. | https://all.docs.genesys.com/PEC-REC/Current/User/blacklist | 2021-11-27T06:04:49 | CC-MAIN-2021-49 | 1637964358118.13 | [] | all.docs.genesys.com |
Anveo Delivery App
With the Anveo Delivery App, you capture the entire delivery process digitally. The drivers get mobile access to all the delivery related information they need from Microsoft Dynamics 365 Business Central.
Please note, that this description only applies to the base configuration of the Anveo Delivery App. You might be using a customized version of the Anveo Delivery App.
Edit Start Location for Route
The Edit My Location Panel allows the driver to choose between a custom location or my location from where he will start the routing for his deliveries. If My Location is selected the start point of the route is the current location of the device based on the GPS signal of the device. After selecting a Custom Location the driver can define a custom address to start his routing from by entering information about address, post code and city.
Working with Deliveries
When selecting the Deliveries icon the list of orders to be delivered opens. The orders are sorted in the sequence they should be delivered in.
The sequence the orders are going to be delivered in is expected from Microsoft Dynamics 365 Business Central. The Anveo Delivery App is not doing route optimization.
Navigate to single delivery
By using the long press you can display the route to the shipping address of this particular order in Google Maps.
Display route to all deliveries
By using the Show Delivery Route action from the menu or pressing the map button you can navigate to all open deliveries in the list of open Delivery Orders. The sequence of delivery is expected to be predefined in Microsoft Dynamics 365 Business Central.
Delivery information from back office
If additional information regarding the delivery exist a red note is shown stating Delivery Information Available. Via a long press on the order, you can access this delivery information. The comments displayed here are the comments stored in the comments action on the Sales Order Page in Microsoft Dynamics 365 Business Central.
The Long action is a swipe to the left on iOS, a long press on Android and a right click on Windows.
Delivery not possible
If the delivery of the order is not possible you can cancel the delivery by using the Delivery not possible action on the long press of the order. After pressing the action, you have the possibility to place a comment regarding to why the delivery was impossible and then abort the delivery. The order will then disappear from your list of open deliveries and return to the backoffice to be reassigned.
Adapt Quantity shipped
You have the possibility to adapt the Quantity to Ship if you can’t deliver the full quantity of the item to be delivered. The reason for that could be, that some items are missing, damaged or broken. On the long press of the line to be delivered you have the possibility to store comments for the backoffice to describe, why the quantity has been adapted. The Add Pictures option also gives you the possibility to document damages or faults of the delivery with the camera of the device.
Return of goods by customer
In the menu of the “Delivery Page you have the possibility to capture returns such as pallets or other return packages via the action Capture Return Line . Since handling returns is a quite individual process also depending on what kind of returns are handled the Anveo Delivery App creates a note line to give you the possibility to capture the information about what has been returned. This information should then be handled by the backoffice.
Delivery to a GPS location
There might be cases, where you want to tag a specific GPS location of delivery. This might be the case, when delivering large items or delivering to a construction site. For this scenario you have the possibility to use the action “Save on the “Delivery. With this action the current GPS location of the device is stored with the delivery in longitude and latitude.
Please note, that this functionality can only be used if the Anveo Mobile App can access GPS information of the device. The accuracy of this information is depending on the GPS information available on the device.
Finish delivery
After successful delivery you have the possibility to capture the signature of the customer with the action “Capture on the “Delivery Page. This action can also be accessed via the clipboard button on the bottom of the page. From the menu you may create a “Shipment to display to the customer the delivery he is signing. This document can also be sent to the customer via e-mail.
By using the action Order Delivered you have the possibility to finish the delivery of this order. This action can also be accessed via the checkmark button on the bottom of the Page. You will rceive a warning, if no customer signature has been captured yet. After confirming the delivery the order will disappear from the list of open deliveries and flagged to the backoffice as delivered.
The Anveo Delivery App in Microsoft Dynamics 365 Business Central
The Anveo Delivery App is based on Sales Orders in Microsoft Dynamics 365 Business Central. Sales Orders should be assigned to the driver and routed from the backoffice. This can either be done manually or via a routing add-on for Microsoft Dynamics 365 Business Central. The driver is identified as a warehouse location in Microsoft Dynamics 365 Business Central.
The Anveo Delivery App may also be based on Posted Sales Documents. This adaptation to the app will have to be customized within a project.
Delivery cars and warehouse locations
The assignment of orders for delivery is done by assigning the Sales Order to a delivery vehicle represented by a warehouse location in Microsoft Dynamics 365 Business Central. Linking the delivery vehicle to a warehouse location will later give you the possibility to move items to this warehouse location for delivery and be able to view the stock of items to be delivered on a certain vehicle.
Processes around using the car as a warehouse location and moving stock will have to be customized within a project.
Schedule an order for delivery
To schedule an order for delivery the Delivery Vehicle Code has to be filled with the corresponding warehouse location, the Mobile App Status has to set to Assigned and a Delivery Sequence No. must be chosen. These values can be set manually or by an route optimization add-on.
Delivery related information from the Anveo Delivery App
When delivering the driver is filling the Quantity to Ship field on the Sales Lines. Return Lines will be created as Sales Lines of type blank. All the information related to the whole delivery such as signature and GPS Location are stored on the Sales Header or with relation to the Sales Header. Information related to an item to be delivered such as a picture of a damaged package are stored in relation to the Sales Lines.
The follwing informations from the Anveo Delivery App are stored in Microsoft Dynamics 365 Business Central:
- GPS Location – stored in Sales Header in fields ACF Delivery Latitude (Table 36 Field 5327204) and ACF Delivery Longitude (Table 36 Field 5327205)
- Signature – stored in Sales Header in field ACF Signature (Table 36 Field 5327200)
- Pictures – stored in table ACF File Repository (Table 5327186)
- Delivery Status – stored in Mobile App Status (Table 36 Field 5327202)
- Return Line – stored as Sales Line (Table 37) of type blank
- Quantity Shipped – stored in the Quantity to Ship (Table 37 Field 18)
Posting the delivery in Microsoft Dynamics 365 Business Central
After delivery the Anveo Delivery App sets the Mobile App Statusof the Sales Order to Delivered. The Sales Order can then be posted by the backoffice. If there are still items to be shipped the Sales Order may be reassigned. | https://docs.anveogroup.com/en/manual/anveo-mobile-app/base-app-features/anveo-delivery-app/?product_version=Version%209&product_platform=Microsoft%20Dynamics%20365%20Business%20Central&product_name=anveo-mobile-app | 2021-11-27T05:47:09 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.anveogroup.com |
Working with your favorite tools
Finally, you can take the results of your query and make visualizations from them or work with them in other tools by selecting Open in app:
The link takes you to a list of all of your appropriate third-party integrations and also provides a link to other available integrations: | https://docs.data.world/en/55156-55165-9--Working-with-your-favorite-tools.html | 2021-11-27T06:22:03 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.data.world |
What is it Good For?
As you probably already know, almost every serial device comes with its own proprietary communication software, optimized to its needs. The same is true also for the Device Server -- it comes with the Device Server Toolkit, ordinarily used to communicate with it. This, then, begs the question: What do we need HyperTerminal for?
Well, the answer is quite simple: Testing. HyperTerminal's strength lies in its simple interface. You just type your commands in, and watch the raw output on the screen. There aren't any buttons to click, or actions which are done without the user knowing it. This is very close to 'raw' communication -- just your input and the device's output, with no software to interpret it in the middle.
This lets you answer very quickly questions such as "what happens when I send..." -- and this, in turn, helps in the development of applications which will communicate with the Device Server directly. Before writing a whole routine in Visual Basic or another language just to send a specific command to the device, you can first send the command yourself, manually, and see what happens in real-time. Then you'll be able to write your code in full confidence that you're doing the correct thing.
Another common use for HyperTerminal is troubleshooting . HyperTerminal accesses the serial port in a very standard way. This means that it can be used when the proprietary application software for the serial device cannot open the Virtual Serial Port, or when communication fails in some other way. You can just run HyperTerminal and play with it, to see if the COM port is indeed opened, if communication reaches the other side of the line, if you get a reply, etc.
Such testing would help you decide if the problems you're having are related to software, hardware, network connectivity, etc. | https://docs.tibbo.com/soism/an008_why | 2021-11-27T06:16:53 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.tibbo.com |
Project Fact Sheet
Measure the impact of projects on applications and domains, and see their suitability with your business strategy, using the Project Fact Sheets and Reports in LeanIX.
As an enterprise architect, you may want to measure the impact of projects on applications and domains, in order to measure suitability with the corporate vision. This article shows you how to do that using the Project Fact Sheet.
What are Projects in LeanIX?
The Project Fact Sheet is one of our ten main fact sheet types, according to our data model. Projects are used to manage or build budgets, see project status and show project impact on application portfolio and their affected user groups.
In addition, orders from each provider can be managed to get a good overview of the budget status. Cost center and controlling reports very often show valuable information only after the fact events, so it is preferable to keep a “real-time” eye on where you spend your money.
How do I create and view Projects in LeanIX?
To create a new project:
- Go to Inventory and click New Fact Sheet in the right hand menu
- Select Project as Fact Sheet Type and enter a name for your project
- Complete Fact Sheet Information, Dependencies, Project Environment, Project Setup and Project Status
To view your existing projects:
Go to Inventory and select the Project Fact Sheet type.
What reports can I get on Projects?
In the Reporting section you have several Project Reports:
Project Portfolio
Your entry point into Project reporting, the Project Portfolio Report allows you to visualize and act based on Project Risk vs. Business Value.
Project Cost
This report lets you visualize Project Costs next to their corresponding lifecycles.
Project Landscape
This report gives you an overview on Project and their relations to Business Capabilities. Here you can get different insights depending on the View you select.
Project Matrix
Here you can dive even deeper into your Projects' data, by selecting two axes and a drilldown relation (i.e. Applications), in addition to several views available.
Once you have selected the desired View, go to Settings and tweak your report based on what you need to drill down on.
Project Roadmap
This view is very useful when you want to compare all your Projects' Lifecycles.
Updated almost 2 years ago | https://docs-eam.leanix.net/docs/project-portfolio-management-1 | 2021-11-27T06:10:14 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['https://files.readme.io/6a8ef17-ProjectFS1.png', 'ProjectFS1.png'],
dtype=object)
array(['https://files.readme.io/6a8ef17-ProjectFS1.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/eecaf46-ProjectFS2.png', 'ProjectFS2.png'],
dtype=object)
array(['https://files.readme.io/eecaf46-ProjectFS2.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/5a940d2-ProjectFS3.png', 'ProjectFS3.png'],
dtype=object)
array(['https://files.readme.io/5a940d2-ProjectFS3.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/c1e5098-ProjectFS4.png', 'ProjectFS4.png'],
dtype=object)
array(['https://files.readme.io/c1e5098-ProjectFS4.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/9a579e5-ProjectFS5.png', 'ProjectFS5.png'],
dtype=object)
array(['https://files.readme.io/9a579e5-ProjectFS5.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ce0c253-ProjectFS6.png', 'ProjectFS6.png'],
dtype=object)
array(['https://files.readme.io/ce0c253-ProjectFS6.png',
'Click to close...'], dtype=object) ] | docs-eam.leanix.net |
4. Building Executable Programs with GNAT¶
This chapter describes first the gnatmake tool (Building with gnatmake), which automatically determines the set of sources needed by an Ada compilation unit and executes the necessary (re)compilations, binding and linking. It also explains how to use each tool individually: the compiler (gcc, see Compiling with gcc), binder (gnatbind, see Binding with gnatbind), and linker (gnatlink, see Linking with gnatlink) to build executable programs. Finally, this chapter provides examples of how to make use of the general GNU make mechanism in a GNAT context (see Using the GNU make Utility)..
4.1. Building with
gnatmake¶
A typical development cycle when working on an Ada program consists of the following steps:
Edit some sources to fix bugs;
Add enhancements;
Compile all sources affected;
Rebind and relink; and
Test.
The third step in particular..
4.1.1. Running
gnatmake¶ sent to
stderr. The output produced by the
-M switch is sent to
stdout.
4.1.2. Switches for
gnatmake¶
You may specify any of the following switches to
gnatmake:
--version
Display Copyright and version, then exit disregarding all other options.
If
--versionwas not used, display usage, then exit disregarding all other options.
--GCC=compiler_name
Program used for compiling. The default is
gcc. You need to use quotes around
compiler_nameif
compiler_namecontains spaces or other separator characters. As an example
--GCC="foo -x -y"will instruct
gnatmaketo use
foo -x -yas your compiler. A limitation of this syntax is that the name and path name of the executable itself must not include any embedded spaces. Note that switch
-cis always inserted after your command name. Thus in the above example the compiler command that will be used by
gnatmakewill be
foo -c -x -y. If several
--GCC=compiler_nameare used, only the last
compiler_nameis taken into account. However, all the additional switches are also taken into account. Thus,
--GCC="foo -x -y" --GCC="bar -z -t"is equivalent to
--GCC="bar -x -y -z -t".
--GNATBIND=binder_name
Program used for binding. The default is
gnatbind. You need to use quotes around
binder_nameif
binder_namecontains spaces or other separator characters. As an example
--GNATBIND="bar -x -y"will instruct
gnatmaketo use
bar -x -yas your binder. Binder switches that are normally appended by
gnatmaketo
gnatbindare now appended to the end of
bar -x -y. A limitation of this syntax is that the name and path name of the executable itself must not include any embedded spaces.
--GNATLINK=linker_name
Program used for linking. The default is
gnatlink. You need to use quotes around
linker_nameif
linker_namecontains spaces or other separator characters. As an example
--GNATLINK="lan -x -y"will instruct
gnatmaketo use
lan -x -yas your linker. Linker switches that are normally appended by
gnatmaketo
gnatlinkare now appended to the end of
lan -x -y. A limitation of this syntax is that the name and path name of the executable itself must not include any embedded spaces.
--create-map-file
When linking an executable, create a map file. The name of the map file has the same name as the executable with extension “.map”.
--create-map-file=mapfile
When linking an executable, create a map file with the specified name.
--create-missing-dirs
When using project files (
-Pproject), automatically create missing object directories, library directories and exec directories.
--single-compile-per-obj-dir
Disallow simultaneous compilations in the same object directory when project files are used.
--subdirs=subdir
Actual object directory of each project file is the subdirectory subdir of the object directory specified or defaulted in the project file.
--unchecked-shared-lib-imports
By default, shared library projects are not allowed to import static library projects. When this switch is used on the command line, this restriction is relaxed.
--source-info=source info file
Specify a source info file. This switch is active only when project files are used. If the source info file is specified as a relative path, then it is relative to the object directory of the main project. If the source info file does not exist, then after the Project Manager has successfully parsed and processed the project files and found the sources, it creates the source info file. If the source info file already exists and can be read successfully, then the Project Manager will get all the needed information about the sources from the source info file and will not look for them. This reduces the time to process the project files, especially when looking for sources that take a long time. If the source info file exists but cannot be parsed successfully, the Project Manager will attempt to recreate it. If the Project Manager fails to create the source info file, a message is issued, but gnatmake does not fail.
gnatmake“trusts” the source info file. This means that if the source files have changed (addition, deletion, moving to a different source directory), then the source info file need to be deleted and recreated.
-a
Consider all files in the make process, even the GNAT internal system files (for example, the predefined Ada library files), as well as any locked files. Locked files are files whose ALI file is write-protected. By default,is also useful in conjunction with
-fif you need to recompile an entire application, including run-time files, using special configuration pragmas, such as a
Normalize_Scalarspragma.
By default
gnatmake -acompiles all GNAT internal files with
gcc -c -gnatpgrather than
gcc -c.
-b
Bind only. Can be combined with
-cto do compilation and binding, but no link. Can be combined with
-lto do binding and linking. When not combined with
-call the units in the closure of the main program must have been previously compiled and must be up to date. The root unit specified by
file_namemay be given without extension, with the source extension or, if no GNAT Project File is specified, with the ALI file extension.
-c
Compile only. Do not perform binding, except when
-bis also specified. Do not perform linking, except if both
-band
-lare also specified. If the root unit specified by
file_nameis not a main unit, this is the default. Otherwise
gnatmakewill attempt binding and linking unless all objects are up to date and the executable is more recent than the objects.
-Pis used, a mapping file is always used, so
-Cis unnecessary; in this case the mapping file is initially populated based on the project file. If
-Cis used without
-P, the mapping file is initially empty. Each invocation of the compiler will add any newly accessed sources to the mapping file.
).
-d
Display progress for each source, up to date or not, as a single line:
completed x out of y (zz%)
If the file needs to be compiled this is displayed after the invocation of the compiler. These lines are displayed even in quiet output mode.
-D dir
Put all object files and ALI file in directory
dir. If the
-Dswitch is not used, all object files and ALI files go in the current working directory.
This switch cannot be used when using a project file.
-eInnn
Indicates that the main source is a multi-unit source and the rank of the unit in the source file is nnn. nnn needs to be a positive number and a valid index in the source. This switch cannot be used when
gnatmakeis invoked for several mains.
-eL
Follow all symbolic links when processing project files. This should be used if your project uses symbolic links for files or directories, but is not needed in other cases.
This also assumes that no directory matches the naming scheme for files (for instance that you do not have a directory called “sources.ads” when using the default GNAT naming scheme).
When you do not have to use this switch (i.e., by default), gnatmake is able to save a lot of system calls (several per source file and object file), which can result in a significant speed up to load and manipulate a project file, especially when using source files from a remote system.
-eS
Output the commands for the compiler, the binder and the linker on standard output, instead of standard error.
-f
Force recompilations. Recompile all sources, even though some object files may be up to date, but don’t recompile predefined or GNAT internal files or locked files (files with a write-protected ALI file), unless the
-aswitch is also specified.
-F
When using project files, if some errors or warnings are detected during parsing and verbose mode is not in effect (no use of switch -v), then error lines start with the full path name of the project file, rather than its simple file name.
-g
Enable debugging. This switch is simply passed to the compiler and to the linker.
-i
In normal mode,
gnatmakecompiles all object files and ALI files into the current directory. If the
-iswitch (see.
-jn
Use
nprocesses to carry out the (re)compilations. On a multiprocessor machine compilations will occur in parallel. If
nis 0, then the maximum number of parallel compilations is the number of core processors on the platform. In the event of compilation errors, messages from various compilations might get interspersed (but
gnatmakewill give you the full ordered list of failing compiles at the end). If this is problematic, rerun the make process with n set to 1 to get a clean list of messages.
-k
Keep going. Continue as much as possible after a compilation error. To ease the programmer’s task in case of compilation errors, the list of sources for which the compile fails is given when
gnatmaketerminates.
If
gnatmakeis invoked with several
file_namesand with this switch, if there are compilation errors when building an executable,
gnatmakewill not attempt to build the following executables.
-l
Link only. Can be combined with
-bto binding and linking. Linking will not be performed if combined with
-cbut not with
-b. When not combined with
-ball the units in the closure of the main program must have been previously compiled and must be up to date, and the main program needs to have been bound. The root unit specified by
file_namemay be given without extension, with the source extension or, if no GNAT Project File is specified, with the ALI file extension.
-m
Specify that the minimum necessary amount of recompilations be performed. In this modemakenot to recompile files that depend on it (provided other sources on which these files depend have undergone no semantic modifications). Note that the debugging information may be out of date with respect to the sources if the
-mswitch causes a compilation to be switched, so the use of this switch represents a trade-off between compilation time and accurate debugging information.
-M
Check if all objects are up to date. If they are, output the object dependencesirin the following list) are never reported.
-n.
-o exec_name
Output executable name. The name of the final executable program will be
exec_name. If the
-oswitch is omitted the default name for the executable will be the name of the input file in appropriate form for an executable file on the host system.
This switch cannot be used when invoking
gnatmakewith several
file_names.
-p
Same as
--create-missing-dirs
-Pproject
Use project file
project. Only one such switch can be used.
-q
Quiet. When this flag is not set, the commands carried out by
gnatmakeare displayed.
-s
Recompile if compiler switches have changed since last compilation. All compiler switches but -I and -o are taken into account in the following way: orders between different ‘first letter’ switches are ignored, but orders between same switches are taken into account. For example,
-O -O2is different than
-O2 -O, but
-g -Ois equivalent to
-O -g.
This switch is recommended when Integrated Preprocessing is used.
-u
Unique. Recompile at most the main files. It implies -c. Combined with -f, it is equivalent to calling the compiler directly. Note that using -u with a project file and no main has a special meaning.
.
-v
Verbose. Display the reason for all recompilations
gnatmakedecides are necessary, with the highest verbosity level.
-vl
Verbosity level Low. Display fewer lines than in verbosity Medium.
-vm
Verbosity level Medium. Potentially display fewer lines than in verbosity High.
-vh
Verbosity level High. Equivalent to -v.
-vPx
Indicate the verbosity of the parsing of GNAT project files. See Switches Related to Project Files.
-xis used, mains specified on the command line need to be sources of a project file.
-Xname=value
Indicate that external variable
namehas the value
value. The Project Manager will use this value for occurrences of
external(name)when parsing the project file. Switches Related to Project Files.
-z
No main subprogram. Bind and link the program even if the unit name given on the command line is a package name. The resulting executable will execute the elaboration routines of the package and its closure, then the finalization routines.
GCC switches
Any uppercase or multi-character switch that is not a
gnatmake switch
is passed to
gcc (e.g.,
-O,
-gnato, etc.)
Source and library search path switches
-aIdir
When looking for source files also look in directory
dir. The order in which source files search is undertaken is described in Search Paths and the Run-Time Library (RTL).
-aLdir
Consider
diras being an externally provided Ada library. Instructs
gnatmaketo skip compilation units whose
.ALIfiles have been located in directory
dir. This allows you to have missing bodies for the units in
dirand to ignore out of date bodies for the same units.
When searching for library and object files, look in directory
dir. The order in which library files are searched is described in Search Paths for gnatbind.
-Adir
Equivalent to
-aLdir
-aIdir.
-Idir
Equivalent to
-aOdir -aIdir.
-I-
Do not look for source files in the directory containing the source file named in the command line. Do not look for ALI or object files in the directory where
gnatmakewas invoked.
-Ldir
Add directory
dirto the list of directories in which the linker will search for libraries. This is equivalent to
-largs
-Ldir. Furthermore, under Windows, the sources pointed to by the libraries path set in the registry are not searched for.
-nostdinc
Do not look for source files in the system default directory.
-nostdlib
Do not look for library files in the system default directory.
--RTS=rts-path
Specifies the default location of the run-time library. GNAT looks for the run-time in the following directories, and stops as soon as a valid run-time is found (
adaincludeor
ada_source_path, and
adalibor
ada_object_pathpresent):
<current directory>/$rts_path
<default-search-dir>/$rts_path
<default-search-dir>/rts-$rts_path
The selected path is handled like a normal RTS path.
4.1.3. Mode Switches for
gnatmake¶.
-cargs switches
Compiler switches. Here
switchesis a list of switches that are valid switches for
gcc. They will be passed on to all compile steps performed by
gnatmake.
-bargs switches
Binder switches. Here
switchesis a list of switches that are valid switches for
gnatbind. They will be passed on to all bind steps performed by
gnatmake.
-largs switches
Linker switches. Here
switchesis a list of switches that are valid switches for
gnatlink. They will be passed on to all link steps performed by
gnatmake.
-margs switches
Make switches. The switches are directly interpreted by
gnatmake, regardless of any previous occurrence of
-cargs,
-bargsor
-largs.
4.1.4. Notes on the Command Line¶
This section contains some additional useful notes on the operation
of the
gnatmake command.
If
gnatmakefinds no ALI files, it recompiles the main program and all other units required by the main program. This means that
gnatmakecan be used for the initial compile, as well as during subsequent steps of the development cycle.
If you enter
gnatmake foo.adb, where
foois a subunit or body of a generic unit,
gnatmakerecompiles
foo.adb(because it finds no ALI) and stops, issuing a warning.
In
gnatmakethe switch
-Iis used to specify both source and library file paths. Use
-aIinstead if you just want to specify source paths only and
-aOif you want to specify library paths only.
gnatmakewill ignore any files whose ALI file is write-protected. This may conveniently be used to exclude standard libraries from consideration and in particular it means that the use of the
-fswitch will not recompile these files unless
-a`include-dir` -aL`obj-dir` main
Using
gnatmakealong with the
-m (minimal recompilation)switch provides a mechanism for avoiding unnecessary recomp.
4.1.5. How
gnatmake Works¶ ‘upside.
4.1.6. Examples of
gnatmake Usage¶
- gnatmake hello.adb
Compile all files necessary to bind and link the main program
hello.adb(containing unit
Hello) and bind and link the resulting object files to generate an executable file
hello.
- gnatmake main1 main2 main3
Compile all files necessary to bind and link the main programs
main1.adb(containing unit
Main1),
main2.adb(containing unit
Main2) and
main3.adb(containing unit
Main3) and bind and link the resulting object files to generate three executable files
main1,
main2and
main3.
- gnatmake -q Main_Unit -cargs -O2 -bargs -l
Compile all files necessary to bind and link the main program unit
Main_Unit(from file
main_unit.adb). All compilations will be done with optimization level 2 and the order of elaboration will be listed by the binder.
gnatmakewill operate in quiet mode, not displaying commands it is executing.
4.2. Compiling with
gcc¶
This section discusses how to compile Ada programs using the
gcc
command. It also describes the set of switches
that can be used to control the behavior of the compiler.
4.2.1. Compiling Programs¶.
TESTING: the
--foobarNN switch two separate
files to be compiled:
$.
4.2.2. Search Paths and the Run-Time Library (RTL)¶:
The directory containing the source file of the main unit being compiled (the file name on the command line).
Each directory named by an
-Iswitch given on the
gcccommand line, in the order given.
Each of the directories listed in the text file whose name is given by the
ADA_PRJ_INCLUDE_FILEenvironment variable.
ADA_PRJ_INCLUDE_FILEis normally set by gnatmake or by the gnat driver when project files are used. It should not normally be set by other means.
Each of the directories listed in the value of the
ADA_INCLUDE_PATHenvironment variable. Construct this value exactly as the
PATHenvironment variable: a list of directory names separated by colons (semicolons when working with the NT version).
The content of the
ada_source_pathfile which is part of the GNAT installation tree and is used to store standard libraries such as the GNAT Run Time Library (RTL) source files. Installing a library.
4.2.3. Order of Compilation Issues¶:
There is no point in compiling: performing a compilation never obsoletes anything. The only way you can obsolete something and require recompilations is to modify one of the source files on which it depends.
There is no library as such, apart from the ALI files .
4.2.4. Examples¶
The following are some typical Ada compilation command line examples:
$ gcc -c xyz.adb
Compile body in file
xyz.adb with all default options.
$ gcc -c -O2 -gnata xyz-def.adb
Compile the child unit package in file
xyz-def.adb with extensive
optimizations, and pragma
Assert/Debug statements
enabled.
$ gcc -c -gnatc abc-def.adb
Compile the subunit in file
abc-def.adb in semantic-checking-only
mode.
4.3. Compiler Switches¶.
4.3.1. Alphabetical List of All Switches¶
-b target
Compile your program to run on
target, which is the name of a system configuration. You must have a GNAT cross-compiler built if
targetis not the same as your host system.
-Bdir
Load compiler executables (for example,
gnat1, the Ada compiler) from
dirinstead of the default location. Only use this switch when multiple versions of the GNAT compiler are available. See the “Options for Directory Search” section in the Using the GNU Compiler Collection (GCC) manual for further details. You would normally use the
-bor
-Vswitch instead.
-c
Compile. Always use this switch when compiling Ada programs.
Note: for some other languages when using
gcc, notably in the case of C and C++, it is possible to use use
gccwithout a
-cswitch to compile and link in one step. In the case of GNAT, you cannot use this approach, because the binder must be run and
gcccannot be used to run the GNAT binder.
sumarker is specified, the callgraph is decorated with stack usage information; it is equivalent to
-fstack-usage. When the
damarker is specified, the callgraph is decorated with information about dynamically allocated objects.
-fdiagnostics-format=json
Makes GNAT emit warning and error messages as JSON. Inhibits printing of text warning and errors messages except if
-gnatvor
-gnatlare present.
-fdump-scos
Generates SCO (Source Coverage Obligation) information in the ALI file. This information is used by advanced coverage tools. See unit
SCOsin the compiler sources for details in files
scos.adsand
scos.adb.
-fgnat-encodings=[all|gdb|minimal]
This switch controls the balance between GNAT encodings and standard DWARF emitted in the debug information.
-flto[=n]
Enables Link Time Optimization. This switch must be used in conjunction with the
-Oxswitches (but not with the
-gnatnswitch
-gnatnswitch. The drawback of this approach is that it may require more memory and that the debugging information generated by -g with it might be hardly usable. The switch, as well as the accompanying
-Oxswitches, must be specified both for the compilation and the link phases. If the
nparameter is specified, the optimization and final code generation at link time are executed using
nparallel jobs by means of an installed
makeprogram.
-fno-inline
Suppresses all inlining, unless requested with pragma
Inline_Always. The effect is enforced regardless of other optimization or inlining switches. Note that inlining can also be suppressed on a finer-grained basis with pragma
No_Inline.
-fno-inline-functions
Suppresses automatic inlining of subprograms, which is enabled if
-O3is used.
-fno-inline-small-functions
Suppresses automatic inlining of small subprograms, which is enabled if
-O2is used.
-fno-inline-functions-called-once
Suppresses inlining of subprograms local to the unit and called once from within it, which is enabled if
-O1is used.
-fno-ivopts
Suppresses high-level loop induction variable optimizations, which are enabled if
-O1is used. These optimizations are generally profitable but, for some specific cases of loops with numerous uses of the iteration variable that follow a common pattern, they may end up destroying the regularity that could be exploited at a lower level and thus producing inferior code.
-fno-strict-aliasing
Causes the compiler to avoid assumptions regarding non-aliasing of objects of different types. See Optimization and Strict Aliasing for details.
-gnato0for very peculiar cases of low-level programming.
-fstack-check
Activates stack checking. See Stack Overflow Checking for details.
-fstack-usage
Makes the compiler output stack usage information for the program, on a per-subprogram basis. See Static Stack Usage Analysis for details.
-g
Generate debugging information. This information is stored in the object file and copied from there to the final executable file by the linker, where it can be read by the debugger. You must use the
-gswitch if you plan on using the debugger.
-gnat05
Allow full Ada 2005 features.
-gnat12
Allow full Ada 2012 features.
-gnat2005
Allow full Ada 2005 features (same as
-gnat05)
-gnat2012
Allow full Ada 2012 features (same as
-gnat12)
-gnat2022
Allow full Ada 2022 features
-gnat83
Enforce Ada 83 restrictions.
-gnat95
Enforce Ada 95 restrictions.
Note: for compatibility with some Ada 95 compilers which support only the
overridingkeyword of Ada 2005, the
-gnatd.Dswitch can be used along with
-gnat95to achieve a similar effect with GNAT.
-gnatd.Dinstructs GNAT to consider
overridingas a keyword and handle its associated semantic checks, even in Ada 95 mode.
-gnata
Assertions enabled..
-gnatA
Avoid processing
gnat.adc. If a
gnat.adcfile is present, it will be ignored.
-gnatb
Generate brief messages to
stderreven if verbose mode set.
-gnatB
Assume no invalid (bad) values except for ‘Valid attribute use (Validity Checking).
-gnatc
Check syntax and semantics only (no code generation attempted). When the compiler is invoked by
gnatmake, if the switch
-gnatcis only given to the compiler (after
-cargsor in package Compiler of the project file,
gnatmakewill fail because it will not find the object file after compilation. If
gnatmakeis called with
-gnatcas a builder switch (before
-cargsor in package Builder of the project file) then
gnatmakewill not fail because it will not look for the object files after compilation, and it will not try to build and link.
-gnatC
Generate CodePeer intermediate format (no code generation attempted). This switch will generate an intermediate representation suitable for use by CodePeer (
.scilfiles). This switch is not compatible with code generation (it will, among other things, disable some switches such as -gnatn, and enable others such as -gnata).
-gnatd
Specify debug options for the compiler. The string of characters after the
-gnatdspecifies the specific debug options. The possible characters are 0-9, a-z, A-Z, optionally preceded by a dot or underscore. See compiler source file
debug.adbfor details of the implemented debug options. Certain debug options are relevant to applications programmers, and these are documented at appropriate points in this users guide.
-gnatD
Create expanded source files for source level debugging. This switch also suppresses generation of cross-reference information (see
-gnatx). Note that this switch is not allowed if a previous -gnatR switch has been given, since these two switches are not compatible.
fails with a
Program_Errorat run time because the actuals for
Val_1and
Val_2denote the same object. The second call executes without raising an exception because
Self(Obj)produces an anonymous object which does not share the memory location of
Obj.
-gnateb
Store configuration files by their basename in ALI files. This switch is used for instance by gprbuild for distributed builds in order to prevent issues where machine-specific absolute paths could end up being stored in ALI files.
-gnatec=path
Specify a configuration pragma file (the equal sign is optional) (The Configuration Pragmas Files).
-gnateC
Generate CodePeer messages in a compiler-like format. This switch is only effective if
-gnatcCis also specified and requires an installation of CodePeer.
-gnated
Disable atomic synchronization
-gnateDsymbol[=value]
Defines a symbol, associated with
value, for preprocessing. (Integrated Preprocessing).
.
-gnatef
Display full source path name in brief error messages.
-gnateF
Check for overflow on all floating-point operations, including those for unconstrained predefined types. See description of pragma
Check_Float_Overflowin GNAT RM.
-gnateg
-gnatceg
The
-gnatcswitch must always be specified before this switch, e.g.
-gnatceg. Generate a C header from the Ada input file. See Generating C Headers for Ada Specifications for more information.
-gnateG
Save result of preprocessing in a text file.
-gnateinnn
Set maximum number of instantiations during compilation of a single unit to
nnn. This may be useful in increasing the default maximum of 8000 for the rare case when a single unit legitimately exceeds this limit.
-gnateInnn
Indicates that the source is a multi-unit source and that the index of the unit to compile is
nnn.
nnnneeds to be a positive number and need to be a valid index in the multi-unit source.
-gnatel
This switch can be used with the static elaboration model to issue info messages showing where implicit
pragma Elaborateand
pragma Elaborate_Allare.
-gnateL
This switch turns off the info messages about implicit elaboration pragmas.
-gnatem=path
Specify a mapping file (the equal sign is optional) (Units to Sources Mapping Files).
-gnatep=file
Specify a preprocessing data file (the equal sign is optional) (Integrated Preprocessing).
.
-gnateS
Synonym of
-fdump-scos, kept for backwards compatibility.
-gnatet=path
Generate target dependent information. The format of the output file is described in the section about switch
-gnateT.
denotes a natural integer value,
Posdenotesis the number of bits in a storage unit, the equivalent of GCC macro
BITS_PER_UNITdocumented as follows: Define this macro to be the number of bits in an addressable storage unit (byte); normally 8.
Bits_Per_Wordis the number of bits in a machine word, the equivalent of GCC macro
BITS_PER_WORDdocumentedis the maximum alignment that the compiler can choose by default for a type or object, which is also the maximum alignment that can be specified in GNAT. It is computed for GCC backends as
BIGGEST_ALIGNMENT / BITS_PER_UNITwhere GCC macro
BIGGEST_ALIGNMENTis documented as follows: Biggest alignment that any data type can require on this machine, in bits.
Max_Unaligned_Fieldis the maximum size for unaligned bit field, which is 64 for the majority of GCC targets (but can be different on some targets).
Strict_Alignmentis the equivalent of GCC macro
STRICT_ALIGNMENTdocumented as follows: Define this macro to be the value 1 if instructions will fail to work if given data not on the nominal alignment. If instructions will merely go slower in that case, define this macro as 0.
System_Allocator_Alignmentis the guaranteed alignment of data returned by calls to
malloc.
The format of the input file is as follows. First come the values of the variables defined above, with one line per value:
name value
where
nameis the name of the parameter, spelled out in full, and cased as in the above list, and
valueis the string name of the type (which can have single spaces embedded in the name (e.g. long double),
digsis the number of digits for the floating-point type,
float_repis the float representation (I for IEEE-754-Binary, which is the only one supported at this time),
sizeis the size in bits,
alignment
-gnateu
Ignore unrecognized validity, warning, and style switches that appear after this switch is given. This may be useful when compiling sources developed on a later version of the compiler with an earlier version. Of course the earlier version must support this switch.
-gnateV
Check that all actual parameters of a subprogram call are valid according to the rules of validity checking (Validity Checking).
-gnateY
Ignore all STYLE_CHECKS pragmas. Full legality checks are still carried out, but the pragmas have no effect on what style checks are active. This allows all style checking options to be controlled from the command line.
-gnatE
Dynamic elaboration checking mode enabled. For further details see Elaboration Order Handling in GNAT.
-gnatf
Full errors. Multiple errors per line, all undefined references, do not attempt to suppress cascaded errors.
-gnatF
Externals names are folded to all uppercase.
-gnatg
Internal GNAT implementation mode. This should not be used for applications programs, it is intended only for use by the compiler and its run-time library. For documentation, see the GNAT sources. Note that
-gnatgimplies
-gnatw.geand
-gnatygso that all standard warnings and all standard style options are turned on. All warnings and style messages are treated as errors.
-gnatG=nn
List generated expanded code in source form.
-gnath
Output usage information. The output is written to
stdout.
-gnatH
Legacy elaboration-checking mode enabled. When this switch is in effect, the pre-18.x access-before-elaboration model becomes the de facto model. For further details see Elaboration Order Handling in GNAT.
-gnatic
Identifier character set (
c= 1/2/3/4/5/9/p/8/f/n/w). For details of the possible selections for
c, see Character Set Control.
.
-gnatjnn
Reformat error messages to fit on
nncharacter lines
‘Access
Requeue statements
Select statements
Synchronous task suspension
and does not emit compile-time diagnostics or run-time checks. For further details see Elaboration Order Handling in GNAT.
-gnatk=n
Limit file names to
n(1-999) characters (
k= krunch).
-gnatl
Output full source listing with embedded error messages.
-gnatL
Used in conjunction with -gnatG or -gnatD to intersperse original source lines (as comment lines with line numbers) in the expanded source output.
-gnatm=n
Limit number of detected error or warning messages to
nwhere
nis in the range 1..999999.. The equal sign here is optional. A value of zero means that no limit applies.
-gnatn[12]
Activate inlining across units for subprograms for which pragma
Inline.
-gnatN
Activate front end inlining for subprograms for which pragma
Inlineis specified. This inlining is performed by the front end and will be visible in the
-gnatGoutput.
When using a gcc-based back end, then the use of
-gnatNis deprecated, and the use of
-gnatnis preferred. Historically front end inlining was more extensive than the gcc back end inlining, but that is no longer the case.
-gnato0
Suppresses overflow checking. This causes the behavior of the compiler to match the default for older versions where overflow checking was suppressed by default. This is equivalent to having
pragma Suppress (Overflow_Check)in a configuration pragma file.
-gnato??
Set default mode for handling generation of code to avoid intermediate arithmetic overflow. Here
??is two digits, a single digit, or nothing. Each digit is one of the digits
1through
3:
If only one digit appears, then it applies to all cases; if two digits are given, then the first applies outside assertions, pre/postconditions, and type invariants, and the second applies within assertions, pre/postconditions, and type invariants.
If no digits follow the
-gnato, then it is equivalent to
-gnato11, causing all intermediate overflows to be handled in strict mode.
This switch also causes arithmetic overflow checking to be performed (as though
pragma Unsuppress (Overflow_Check)had been specified).
The default if no option
-gnatois given is that overflow handling is in
STRICTmode (computations done using the base type), and that overflow checking is enabled.
Note that division by zero is a separate check that is not controlled by this switch (divide-by-zero checking is on by default).
See also Specifying the Desired Mode.
-gnatp
Suppress all checks. See Run-Time Checks for details. This switch has no effect if cancelled by a subsequent
-gnat-pswitch.
-gnat-p
Cancel effect of previous
-gnatpswitch.
-gnatq
Don’t quit. Try semantics, even if parse errors.
-gnatQ
Don’t quit. Generate
ALIand tree files even if illegalities. Note that code generation is still suppressed in the presence of any errors, so even with
-gnatQno object file is generated.
-gnatr
Treat pragma Restrictions as Restriction_Warnings.
-gnatR[0|1|2|3|4][e][j][m][s]
Output representation information for declared types, objects and subprograms. Note that this switch is not allowed if a previous
-gnatDswitch has been given, since these two switches are not compatible.
-gnats
Syntax check only.
-gnatS
Print package Standard.
-gnatTnnn
All compiler tables start at
nnntimes usual starting size.
-gnatu
List units for this compilation.
-gnatU
Tag all error messages with the unique string ‘error:’
-gnatv
Verbose mode. Full error output with source lines to
stdout.
-gnatV
Control level of validity checking (Validity Checking).
-gnatwxxx
Warning mode where
xxxis a string of option letters that denotes the exact warnings that are enabled or disabled (Warning Message Control).
-gnatWe
Wide character encoding method (
e=n/h/u/s/e/8).
-gnatx
Suppress generation of cross-reference information.
-gnatX
Enable GNAT implementation extensions and latest Ada version.
-gnaty
Enable built-in style checks (Style Checking).
-gnatzm
Distribution stub generation and compilation (
m=r/c for receiver/caller stubs).
-Idir
Direct GNAT to search the
dirdirectory for source files needed by the current compilation (see Search Paths and the Run-Time Library (RTL)).
-I-
Except for the source file named in the command line, do not look for source files in the directory containing the source file named in the command line (see Search Paths and the Run-Time Library (RTL)).
-o file
This switch is used in
gccto redirect the generated object file and its associated ALI file. Beware of this switch with GNAT, because it may cause the object file and ALI file to have different names which in turn may confuse the binder and the linker.
-nostdinc
Inhibit the search of the default location for the GNAT Run Time Library (RTL) source files.
-nostdlib
Inhibit the search of the default location for the GNAT Run Time Library (RTL) ALI files.
-O[n]
ncontrols the optimization level:
See also Optimization Levels.
-pass-exit-codes
Catch exit codes from the compiler and use the most meaningful as exit status.
--RTS=rts-path
Specifies the default location of the run-time library. Same meaning as the equivalent
gnatmakeflag (Switches for gnatmake).
-S
Used in place of
-cto cause the assembler source file to be generated, using
.sas the extension, instead of the object file. This may be useful if you need to examine the generated assembly code.
-fverbose-asm
Used in conjunction with
-Sto cause the generated assembly code file to be annotated with variable names, making it significantly easier to follow.
-v
Show commands generated by the
gccdriver. Normally used only for debugging purposes or if you need to be sure what version of the compiler you are executing.
-V ver
Execute
verversion of the compiler. This is the
gccversion, not the GNAT version.
-w
Turn off warnings generated by the back end of the compiler. Use of this switch also causes the default for front end warnings to be set to suppress (as though
-gnatwshad appeared at the start of the options). switch
-gnatcif combined with other switches must come first in the string.
The switch
-gnatsif combined with other switches must come first in the string.
The switches
-gnatzcand
-gnatzrmay not be combined with any other switches, and only one of them may appear in the command line.
The switch
-gnat-pmay not be combined with any other switch.
Once a ‘y’ appears in the string (that is a use of the
-gnatyswitch), then all further characters in the switch are interpreted as style modifiers (see description of
-gnaty).
Once a ‘d’ appears in the string (that is a use of the
-gnatdswitch), then all further characters in the switch are interpreted as debug flags (see description of
-gnatd).
Once a ‘w’ appears in the string (that is a use of the
-gnatwswitch), then all further characters in the switch are interpreted as warning mode modifiers (see description of
-gnatw).
Once a ‘V’ appears in the string (that is a use of the
-gnatVswitch), then all further characters in the switch are interpreted as validity checking options (Validity Checking).
Option ‘em’, ‘ec’, ‘ep’, ‘l=’ and ‘R’ must be the last options in a combined list of options.
4.3.2. Output and Error Message Control¶
The standard default format for error messages is called ‘brief.
GNAT Studio can parse the error messages
and point to the referenced character.
The following switches provide control over the error message
format:
-gnatv
The
vstands for verbose. The effect of this setting is to write long-format error messages to
stdout(the standard output file.
The
lstands for list. This switch causes a full listing of the file to be generated. In the case where a body is compiled, the corresponding spec is also listed, along with any subunits. Typical output from compiling a package body
p.adbmight look like:
Compiling: p.adb 1. package body p is 2. procedure a; 3. procedure a is separate; 4. begin 5. null | >>> missing ";" 6. end; Compiling: p.ads 1. package p is 2. pragma Elaborate_Body | >>> missing ";" 3. end p; Compiling: p-a.adb 1. separate p | >>> missing "(" 2. procedure a is 3. begin 4. null | >>> missing ";" 5. end;
When you specify the
-gnatvor
-gnatlswitches and standard output is redirected, a brief summary is written to
stderr(standard error) giving the number of error messages and warning messages generated.
-gnatl=fname
This has the same effect as
-gnatlexcept that the output is written to a file instead of to standard output. If the given name
fnamedoes not start with a period, then it is the full name of the file to be written. If
fnameis an extension, it is appended to the name of the file being compiled. For example, if file
xyz.adbis compiled with
-gnatl=.lst, then the output is written to file xyz.adb.lst.
-gnatU
This switch forces all error messages to be preceded by the unique string ‘error:’. This means that error messages take a few more characters in space, but allows easy searching for and identification of error messages.
-gnatb
The
bstands for brief. This switch causes GNAT to generate the brief format error messages to
stderr(the standard error file) as well as the verbose format message or full listing (which as usual is written to
stdout(the standard output file).
-gnatm=n
The
mstands for maximum.
nis a decimal integer in the range of 1 to 999999 and limits the number of error or warning messages to be generated. For example, using
-gnatm2might yield
e.adb:3:04: Incorrect spelling of keyword "function" e.adb:5:35: missing ".." fatal error: maximum number of errors detected compilation abandoned. A value of zero means that no limit applies.
Note that the equal sign is optional, so the switches
-gnatm2and
-gnatm=2are equivalent.
-gnatf
Theswitchswitch also generates additional information for some error messages. Some examples are:
Details on possibly non-portable unchecked conversion
List possible interpretations for ambiguous calls
Additional details on incorrect parameters
-gnatjnn
In normal operation mode (or if
-gnatj0is used), then error messages with continuation lines are treated as though the continuation lines were separate messages (and so a warning with two continuation lines counts as three warnings, and is listed as three separate messages).
If the
-gnatjnnswitch.
-gnatq
The
qstands for quit (really ‘don.
-gnatQ
In normal operation mode, the
ALIfile is not generated if any illegalities are detected in the program. The use of
-gnatQforces generation of the
ALIfile. This file is marked as being in error, so it cannot be used for binding purposes, but it does contain reasonably complete cross-reference information, and thus may be useful for use by tools (e.g., semantic browsing tools or integrated development environments) that are driven from the
ALIfile. This switch implies
-gnatq, since the semantic phase must be run to get a meaningful ALI file.
When
-gnatQis used and the generated
ALIfile is marked as being in error,
gnatmakewill attempt to recompile the source when it finds such an
ALIfile, including with switch
-gnatc.
Note that
-gnatQhas no effect if
-gnatsis specified, since ALI files are never generated if
-gnatsis set.
4.3.3. Warning Message Control¶
In addition to error messages, which correspond to illegalities as defined in the Ada.statement
Duplicate accepts for the same task entry in a
Objects that take too much storage
Unchecked conversion between types of differing sizes
Missing
returnstatementusage that does not have any effect
Standard.Durationused the description of the pragma in the GNAT_Reference_manual).
-gnatwa
Activate most optional warnings.
This switch activates most optional warning messages. See the remaining list in this section for details on optional warning messages that can be individually controlled. The warnings that are not turned on by this switch are:
-gnatwd(implicit dereferencing)
-gnatw.d(tag warnings with -gnatw switch)
-gnatwh(hiding)
-gnatw.h(holes in record layouts)
-gnatw.j(late primitives of tagged types)
-gnatw.k(redefinition of names in standard)
-gnatwl(elaboration warnings)
-gnatw.l(inherited aspects)
-gnatw.n(atomic synchronization)
-gnatwo(address clause overlay)
-gnatw.o(values set by out parameters ignored)
-gnatw.q(questionable layout of record types)
-gnatw_r(out-of-order record representation clauses)
-gnatw.s(overridden size clause)
-gnatwt(tracking of deleted conditional code)
-gnatw.u(unordered enumeration)
-gnatw.w(use of Warnings Off)
-gnatw.y(reasons for package needing body)
All other optional warnings are turned on.
-gnatwA
Suppress all optional errors.
This switch suppresses all optional warning messages, see remaining list in this section for details on optional warning messages that can be individually controlled. Note that unlike switch
-gnatws, the use of switch
-gnatwAdoes not suppress warnings that are normally given unconditionally and cannot be individually controlled (for example, the warning about a missing exit path in a function). Also, again unlike switch
-gnatws, warnings suppressed by the use of switch
-gnatwAcan be individually turned back on. For example the use of switch
-gnatwAfollowed by switch
-gnatwdwill suppress all optional warnings except the warnings for implicit dereferencing.
-gnatw.a
Activate warnings on failing assertions.
This switch activates warnings for assertions where the compiler can tell at compile time that the assertion will fail. Note that this warning is given even if assertions are disabled. The default is that such warnings are generated.
-gnatw.A
Suppress warnings on failing assertions.
This switch suppresses warnings for assertions where the compiler can tell at compile time that the assertion will fail.
-gnatw_a
Activate warnings on anonymous allocators.
This switch activates warnings for allocators of anonymous access types, which can involve run-time accessibility checks and lead to unexpected accessibility violations. For more details on the rules involved, see RM 3.10.2 (14).
-gnatw_A
Supress warnings on anonymous allocators.
This switch suppresses warnings for anonymous access type allocators.
-gnatwb
Activate warnings on bad fixed values..
-gnatwB
Suppress warnings on bad fixed values.
This switch suppresses warnings for static fixed-point expressions whose value is not an exact multiple of Small.
-gnatw.b
Activate warnings on.
-gnatw.B
Suppress warnings on biased representation.
This switch suppresses warnings for representation clauses that force the use of biased representation.
-gnatwc
Activate warnings on conditionals. warnings.
This warning option also activates a special test for comparisons using the operators ‘>=’ and’ <=’. If the compiler can tell that only the equality condition is possible, then it will warn that the ‘>’ or ‘<’ part of the test is useless and that the operator could be replaced by ‘=’. An example would be comparing a
Naturalvariable <=.
-gnatwC
Suppress warnings on conditionals.
This switch suppresses warnings for conditional expressions used in tests that are known to be True or False at compile time.
-gnatw.c
Activate warnings on missing component clauses.
This switch activates warnings for record components where a record representation clause is present and has component clauses for the majority, but not all, of the components. A warning is given for each component for which no component clause is present.
-gnatw.C
Suppress warnings on missing component clauses.
This switch suppresses warnings for record components that are missing a component clause in the situation described above.
-gnatw_c
Activate warnings on unknown condition in Compile_Time_Warning.
This switch activates warnings on a pragma Compile_Time_Warning or Compile_Time_Error whose condition has a value that is not known at compile time. The default is that such warnings are generated.
-gnatw_C
Suppress warnings on unknown condition in Compile_Time_Warning.
This switch supresses warnings on a pragma Compile_Time_Warning or Compile_Time_Error whose condition has a value that is not known at compile time.
-gnatwd
Activate warnings on implicit dereferencing.
If this switch is set, then the use of a prefix of an access type in an indexed component, slice, or selected component without an explicit
.allwill generate a warning. With this warning enabled, access checks occur only at points where an explicit
.allappears in the source code (assuming no warnings are generated as a result of this switch). The default is that such warnings are not generated.
-gnatwD
Suppress warnings on implicit dereferencing.
This switch suppresses warnings for implicit dereferences in indexed components, slices, and selected components.
-gnatw.d
Activate tagging of warning and info messages.
If this switch is set, then warning messages are tagged, with one of the following strings:
[-gnatw?] Used to tag warnings controlled by the switch
-gnatwxwhere x is a letter a-z.
[-gnatw.?] Used to tag warnings controlled by the switch
-gnatw.xwhere x is a letter a-z.
[-gnatel] Used to tag elaboration information (info) messages generated when the static model of elaboration is used and the
-gnatelswitch
-gnatws.
-gnatw.D
Deactivate tagging of warning and info messages messages.
If this switch is set, then warning messages return to the default mode in which warnings and info messages are not tagged as described above for
-gnatw.d.
.
-gnatw.e
Activate every optional warning.
This switch activates all optional warnings, including those which are not activated.
-gnatwE
Treat all run-time exception warnings as errors.
This switch causes warning messages regarding errors that will be raised during run-time execution to be treated as errors.
-gnatwf
Activate warnings on unreferenced formals.
This switch causes a warning to be generated if a formal parameter is not referenced in the body of the subprogram. This warning can also be turned on using
-gnatwu. The default is that these warnings are not generated.
-gnatwF
Suppress warnings on unreferenced formals.
This switch suppresses warnings for unreferenced formal parameters. Note that the combination
-gnatwufollowed by
-gnatwFhas the effect of warning on unreferenced entities other than subprogram formals.
-gnatwg
Activate warnings on unrecognized pragmas.
This switch causes a warning to be generated if an unrecognized pragma is encountered. Apart from issuing this warning, the pragma is ignored and has no effect. The default is that such warnings are issued (satisfying the Ada Reference Manual requirement that such warnings appear).
-gnatwG
Suppress warnings on unrecognized pragmas.
This switch suppresses warnings for unrecognized pragmas.
-gnatw.g
Warnings used for GNAT sources.
This switch sets the warning categories that are used by the standard GNAT style. Currently this is equivalent to
-gnatwAao.q.s.CI.V.X.Zbut more warnings may be added in the future without advanced notice.
-gnatwh
Activate warnings on hiding.
This switch activates warnings on hiding declarations that are considered potentially confusing. Not all cases of hiding cause warnings; for example an overriding declaration hides an implicit declaration, which is just normal code. The default is that warnings on hiding are not generated.
-gnatwH
Suppress warnings on hiding.
This switch suppresses warnings on hiding declarations.
-gnatw.h
Activate warnings on holes/gaps in records.
This switch activates warnings on component clauses in record representation clauses that leave holes (gaps) in the record layout. If this warning option is active, then record representation clauses should specify a contiguous layout, adding unused fill fields if needed.
-gnatw.H
Suppress warnings on holes/gaps in records.
This switch suppresses warnings on component clauses in record representation clauses that leave holes (haps) in the record layout.
-gnatwi
Activate warnings on implementation units.
This switch activates warnings for a with of withed by user programs. The default is that such warnings are generated
-gnatwI
Disable warnings on implementation units.
This switch disables warnings for a with of an internal GNAT implementation unit.
.
-gnatw.I
Disable warnings on overlapping actuals.
This switch disables warnings on overlapping actuals in a call.
-gnatwj
Activate warnings on obsolescent features (Annex J).
If this warning option is activated, then warnings are generated for calls to subprograms marked with.
In addition to the above cases, warnings are also generated for GNAT features that have been provided in past versions but which have been superseded (typically by features in the new Ada standard). For example,
pragma Ravenscarwill be flagged since its function is replaced by
pragma Profile(Ravenscar), and
pragma Interface_Namewill be flagged since its function is replaced by
pragma Import.
Note that this warning option functions differently from the restriction
No_Obsolescent_Featuresin two respects. First, the restriction applies only to annex J features. Second, the restriction does flag uses of package
ASCII.
-gnatwJ
Suppress warnings on obsolescent features (Annex J).
This switch disables warnings on use of obsolescent features.
-gnatw.j
Activate warnings on late declarations of tagged type primitives.
This switch activates warnings on visible primitives added to a tagged type after deriving a private extension from it.
-gnatw.J
Suppress warnings on late declarations of tagged type primitives.
This switch suppresses warnings on visible primitives added to a tagged type after deriving a private extension from it.
-gnatwk
Activate warnings on variables that could be constants.
This switch activates warnings for variables that are initialized but never modified, and then could be declared constants. The default is that such warnings are not given.
-gnatwK
Suppress warnings on variables that could be constants.
This switch disables warnings on variables that could be declared constants.
.
-gnatw.K
Suppress warnings on redefinition of names in standard.
This switch disables warnings for declarations that declare a name that is defined in package Standard.
-gnatwl
Activate warnings for elaboration pragmas.
This switch activates warnings for possible elaboration problems, including suspicious use of
Elaboratepragmas, when using the static elaboration model, and possible situations that may raise
Program_Errorwhen using the dynamic elaboration model. See the section in this guide on elaboration checking for further details. The default is that such warnings are not generated.
-gnatwL
Suppress warnings for elaboration pragmas.
This switch suppresses warnings for possible elaboration problems.
-gnatw.l
List inherited aspects.
This switch causes the compiler to list inherited invariants, preconditions, and postconditions from Type_Invariant’Class, Invariant’Class, Pre’Class, and Post’Class aspects. Also list inherited subtype predicates.
-gnatw.L
Suppress listing of inherited aspects.
This switch suppresses listing of inherited aspects.
.
-gnatwM
Disable warnings on modified but unreferenced variables.
This switch disables warnings for variables that are assigned or initialized, but never read.
.
-gnatw.M
Disable warnings on suspicious modulus values.
This switch disables warnings for suspicious modulus values.
-gnatwn
Set normal warnings mode.
This switch sets normal warning mode, in which enabled warnings are issued and treated as warnings rather than errors. This is the default mode. the switch
-gnatwncan be used to cancel the effect of an explicit
-gnatwsor
-gnatwe. It also cancels the effect of the implicit
-gnatwethat is activated by the use of
-gnatg.
-gnatw.n
Activate warnings on atomic synchronization.
This switch actives warnings when an access to an atomic variable requires the generation of atomic synchronization code. These warnings are off by default.
-gnatw.N
Suppress warnings on atomic synchronization.
This switch suppresses warnings when an access to an atomic variable requires the generation of atomic synchronization code.
-gnatwo
Activate warnings on address clause overlays.
This switch activates warnings for possibly unintended initialization effects of defining address clauses that cause one variable to overlap another. The default is that such warnings are generated.
-gnatwO
Suppress warnings on address clause overlays.
This switch suppresses warnings on possibly unintended initialization effects of defining address clauses that cause one variable to overlap another.
.
-gnatwp
Activate warnings on ineffective pragma Inlines.
This switch activates warnings for failure of front end inlining (activated by
.
-gnatwP
Suppress warnings on ineffective pragma Inlines.
This switch suppresses warnings on ineffective pragma Inlines. If the inlining mechanism cannot inline a call, it will simply ignore the request silently.
.
-gnatw.P
Suppress warnings on parameter ordering.
This switch suppresses warnings on cases of suspicious parameter ordering.
-gnatw_p
Activate warnings for pedantic checks.
This switch activates warnings for the failure of certain pedantic checks. The only case currently supported is a check that the subtype_marks given for corresponding formal parameter and function results in a subprogram declaration and its body denote the same subtype declaration. The default is that such warnings are not given.
-gnatw_P
Suppress warnings for pedantic checks.
This switch suppresses warnings on violations of pedantic checks.
.
-gnatwQ
Suppress warnings on questionable missing parentheses.
This switch suppresses warnings for cases where the association is not clear and the use of parentheses is preferred.
.
-gnatw.Q
Suppress warnings on questionable layout of record types.
This switch suppresses warnings for cases where the default layout of a record type would very likely cause inefficiencies.
-gnatwr
Activate warnings on redundant constructs.
This switch activates warnings for redundant constructs. The following is the current list of constructs regarded as redundant:
Assignment of an item to itself.
Type conversion that converts an expression to its own type.
Use of the attribute
Basewhere
typ'Baseis the same as
typ.
Use of pragma
Packwhen.
-gnatwR
Suppress warnings on redundant constructs.
This switch suppresses warnings for redundant constructs.
-gnatw.r
Activate warnings for object renaming function.
This switch activates warnings for an object renaming that renames a function call, which is equivalent to a constant declaration (as opposed to renaming the function itself). The default is that these warnings are given.
-gnatw.R
Suppress warnings for object renaming function.
This switch suppresses warnings for object renaming function.
-gnatw_r
Activate warnings for out-of-order record representation clauses.
This switch activates warnings for record representation clauses, if the order of component declarations, component clauses, and bit-level layout do not all agree. The default is that these warnings are not given.
-gnatw_R
Suppress warnings for out-of-order record representation clauses.
-gnatwn.
Note that switch
-gnatwsdoes not suppress warnings from the
gccback end. To suppress these back end warnings as well, use the switch
-win addition to
-gnatws. Also this switch has no effect on the handling of style check messages.
.
-gnatw.S
Suppress warnings on overridden size clauses.
This switch suppresses warnings on component clauses in record representation clauses that override size clauses, and similar warnings when an array component size overrides a size clause.
-gnatwt
Activate warnings for tracking of deleted conditional code.
This switch activates warnings for tracking of code in conditionals (IF and CASE statements) that is detected to be dead code which cannot be executed, and which is removed by the front end. This warning is off by default. This may be useful for detecting deactivated code in certified applications.
-gnatwT
Suppress warnings for tracking of deleted conditional code.
This switch suppresses warnings for tracking of deleted conditional code.
-gnatw.t
Activate warnings on suspicious contracts.
This switch activates warnings on suspicious contracts. This includes warnings on suspicious postconditions (whether a pragma
Postconditionor a
Postaspect in Ada 2012) and suspicious contract cases (pragma or aspect
Contract_Cases).. This switch also controls warnings on suspicious cases of expressions typically found in contracts like quantified expressions and uses of Update attribute. The default is that such warnings are generated.
-gnatw.T
Suppress warnings on suspicious contracts.
This switch suppresses warnings on suspicious contracts.
-gnatwu
Activate warnings on unused entities.
This switch activates warnings to be generated for entities that are declared but not referenced, and for units that are withed withed but never instantiated. In the case where a package or subprogram body is compiled, and there is a with on the corresponding spec that is only referenced in the body, a warning is also generated, noting that the with can be moved to the body. The default is that such warnings are not generated. This switch also activates warnings on unreferenced formals (it includes the effect of
-gnatwf).
-gnatwU
Suppress warnings on unused entities.
This switch suppresses warnings for unused entities and packages. It also turns off warnings on unreferenced formals (and thus includes the effect of
-gnatwF).
-gnatw.u
Activate warnings on unordered enumeration types.
This switch causes enumeration types to be considered as conceptually unordered, unless an explicit pragma.
-gnatw.U
Deactivate warnings on unordered enumeration types.
This switch causes all enumeration types to be considered as ordered, so that no warnings are given for comparisons or subranges for any type.
-gnatwv
Activate warnings on unassigned variables.
This switch activates warnings for access to variables which may not be properly initialized. The default is that such warnings are generated. This switch will also be emitted when initializing an array or record object via the following aggregate:
Array_Or_Record : XXX := (others => <>);
unless the relevant type fully initializes all components.
-gnatwV
Suppress warnings on unassigned variables.
This switch suppresses warnings for access to variables which may not be properly initialized.
-gnatw.v
Activate info messages for non-default bit order.
This switch activates messages (labeled “info”, they are not warnings, just informational messages) about the effects of non-default bit-order on records to which a component clause is applied. The effect of specifying non-default bit ordering is a bit subtle (and changed with Ada 2005), so these messages, which are given by default, are useful in understanding the exact consequences of using this feature.
-gnatw.V
Suppress info messages for non-default bit order.
This switch suppresses information messages for the effects of specifying non-default bit order on record components with component clauses.
-gnatww
Activate warnings on wrong low bound assumption.
This switch activates warnings for indexing an unconstrained string parameter with a literal or S’Length. This is a case where the code is assuming that the low bound is one, which is in general not true (for example when a slice is passed). The default is that such warnings are generated.
-gnatwW
Suppress warnings on wrong low bound assumption.
This switch suppresses warnings for indexing an unconstrained string parameter with a literal or S’Length. Note that this warning can also be suppressed in a particular case by adding an assertion that the lower bound is 1, as shown in the following example:
procedure K (S : String) is pragma Assert (S'First = 1); ...
-gnatw.w
Activate warnings on Warnings Off pragmas.
This switch activates warnings for use of
pragma Warnings (Off, entity)where either the pragma is entirely useless (because it suppresses no warnings), or it could be replaced by
pragma Unreferencedor
pragma Unmodified. Also activates warnings for the case of Warnings (Off, String), where either there is no matching Warnings (On, String), or the Warnings (Off) did not suppress any warning. The default is that these warnings are not given.
-gnatw.W
Suppress warnings on unnecessary Warnings Off pragmas.
This switch suppresses warnings for use of
pragma Warnings (Off, ...).
-gnatwx
Activate warnings on Export/Import pragmas.
This switch activates warnings on Export/Import pragmas when the compiler detects a possible conflict between the Ada and foreign language calling sequences. For example, the use of default parameters in a convention C procedure is dubious because the C compiler cannot supply the proper default, so a warning is issued. The default is that such warnings are generated.
-gnatwX
Suppress warnings on Export/Import pragmas.
This switch suppresses warnings on Export/Import pragmas. The sense of this is that you are telling the compiler that you know what you are doing in writing the pragma, and it should not complain at you.
-gnatw.x
Activate warnings for No_Exception_Propagation mode.
This switch activates warnings for exception usage when pragma Restrictions (No_Exception_Propagation) is in effect. Warnings are given for implicit or explicit exception raises which are not covered by a local handler, and for exception handlers which do not cover a local raise. The default is that these warnings are given for units that contain exception handlers.
-gnatw.X
Disable warnings for No_Exception_Propagation mode.
This switch disables warnings for exception usage when pragma Restrictions (No_Exception_Propagation) is in effect.
-gnatwy
Activate warnings for Ada compatibility issues.
For the most part, newer versions of Ada are upwards compatible with older versions. For example, Ada 2005 programs will almost always work when compiled as Ada 2012. However there are some exceptions (for example the fact that.
-gnatwY
Disable warnings for Ada compatibility issues.
This switch suppresses the warnings intended to help in identifying incompatibilities between Ada language versions.
-gnatw.y
Activate information messages for why package spec needs body.
There are a number of cases in which a package spec needs a body. For example, the use of pragma Elaborate_Body, or the declaration of a procedure specification requiring a completion. This switch causes information messages to be output showing why a package specification requires a body. This can be useful in the case of a large package specification which is unexpectedly requiring a body. The default is that such information messages are not output.
-gnatw.Y
Disable information messages for why package spec needs body.
This switch suppresses the output of information messages showing why a package specification needs a body.
-gnatwz
Activate warnings on unchecked conversions.
This switch activates warnings for unchecked conversions where the types are known at compile time to have different sizes. The default is that such warnings are generated. Warnings are also generated for subprogram pointers with different conventions.
-gnatwZ
Suppress warnings on unchecked conversions.
This switch suppresses warnings for unchecked conversions where the types are known at compile time to have different sizes or conventions.
-gnatw.z
Activate warnings for size not a multiple of alignment.
This switch activates warnings for cases of array and record types with specified
Sizeand
Alignmentattributes where the size is not a multiple of the alignment, resulting in an object size that is greater than the specified size. The default is that such warnings are generated.
-gnatw.Z
Suppress warnings for size not a multiple of alignment.
This switch suppresses warnings for cases of array and record types with specified
Sizeand
Alignmentattributes where the size is not a multiple of the alignment, resulting in an object size that is greater than the specified size. The warning can also be suppressed by giving an explicit
Object_Sizevalue.
-Wunused
The warnings controlled by the
-gnatwswitch are generated by the front end of the compiler. The GCC back end can provide additional warnings and they are controlled by the
-Wswitch. For example,
-Wunusedactivates back end warnings for entities that are declared but not referenced.
-Wuninitialized
Similarly,
-Wuninitializedactivates the back end warning for uninitialized variables. This switch must be used in conjunction with an optimization level greater than zero.
-Wstack-usage=len
Warn if the stack usage of a subprogram might be larger than
lenbytes. See Static Stack Usage Analysis for details.
-Wall
This switch enables most warnings from the GCC back end. The code generator detects a number of warning situations that are missed by the GNAT front end, and this switch can be used to activate them. The use of this switch also sets the default front-end warning mode to
-gnatwa, that is, most front-end warnings are activated as well.
-w
Conversely, this switch suppresses warnings from the GCC back end. The use of this switch also sets the default front-end warning mode to
-gnatws, that is, front-end warnings are suppressed as well.
-Werror
This switch causes warnings from the GCC back end to be treated as errors. The warning string still appears, but the warning messages are counted as errors, and prevent the generation of an object file. The use of this switch also sets the default front-end warning mode to
-gnatwe, that is, front-end warning messages and style check messages are treated as errors as well.:
-
-gnatw.a
-
-gnatwB
-
-gnatw.b
-
-gnatwC
-
-gnatw.C
-
-gnatwD
-
-gnatw.D
-
-gnatwF
-
-gnatw.F
-
-gnatwg
-
-gnatwH
-
-gnatw.H
-
-gnatwi
-
-gnatwJ
-
-gnatw.J
-
-gnatwK
-
-gnatw.K
-
-gnatwL
-
-gnatw.L
-
-gnatwM
-
-gnatw.m
-
-gnatwn
-
-gnatw.N
-
-gnatwo
-
-gnatw.O
-
-gnatwP
-
-gnatw.P
-
-gnatwq
-
-gnatw.Q
-
-gnatwR
-
-gnatw.R
-
-gnatw.S
-
-gnatwT
-
-gnatw.t
-
-gnatwU
-
-gnatw.U
-
-gnatwv
-
-gnatw.v
-
-gnatww
-
-gnatw.W
-
-gnatwx
-
-gnatw.X
-
-gnatwy
-
-gnatw.Y
-
-gnatwz
-
-gnatw.z
4.3.4. Debugging and Assertion Control¶
-gnata
The
-gnataoption is equivalent to the following
Assertion_Policypragma:
pragma Assertion_Policy (Check);
Which is a shorthand for:
pragma Assertion_Policy (Assert => Check, Static_Predicate => Check, Dynamic_Predicate => Check, Pre => Check, Pre'Class => Check, Post => Check, Post'Class => Check, Type_Invariant => Check, Type_Invariant'Class => Check);
The pragmas
Assertand
Debugnormally have no effect and are ignored. This switch, where
astands for ‘assert’, causes pragmas
Assertand
Debugto be activated. This switch also causes preconditions, postconditions, subtype predicates, and type invariants to be activated.
The pragmas have the form:
pragma Assert (<Boolean-expression> [, <static-string-expression>]) pragma Debug (<procedure call>) pragma Type_Invariant (<type-local-name>, <Boolean-expression>) pragma Predicate (<type-local-name>, <Boolean-expression>) pragma Precondition (<Boolean-expression>, <string-expression>) pragma Postcondition (<Boolean-expression>, <string-expression>)
The aspects have the form:
with [Pre|Post|Type_Invariant|Dynamic_Predicate|Static_Predicate] => <Boolean-expression>;
The
Assertpragma causes
Boolean-expressionto be tested. If the result is
True, the pragma has no effect (other than possible side effects from evaluating the expression). If the result is
False, the exception
Assert_Failuredeclared in the package
System.Assertionsis raised (passing
static-string-expression, if present, as the message associated with the exception). If no string expression is given, the default is a string containing the file name and line number of the pragma.
The
Debugpragma causes
procedureto be called. Note that
pragma Debugmay appear within a declaration sequence, allowing debugging procedures to be called between declarations.
For the aspect specification, the
Boolean-expressionis evaluated. If the result is
True, the aspect has no effect. If the result is
False, the exception
Assert_Failureis raised.
4.3.5. Validity Checking¶xxis equivalent to
gnatVcdfimoprst.
VDis specified, a subsequent switch
-gnatVdwill leave the checks turned on. Switch
-gnatVDshouldor
-gnatVfi(the order does not matter) specifies that floating-point parameters of mode
inshould be validity checked.
-gnatVi
Validity checks for ``in`` mode parameters.
Arguments for parameters of mode
inare validity checked in function and procedure calls at the point of call.
-gnatVm
Validity checks for ``in out`` modepsuppresses all run-time checks, including validity checks, and thus implies
-gnatVn. When this switch is used, it cancels any other
-gnatVpreviously issued.
-gnatVo
Validity checks for operator and attribute operands.
Arguments for predefined operators and attributes are validity checked. This includes all operators in package
Standard, the shift operators defined as intrinsic in package
Interfacesand
-gnatVmwhichis set, then this assumption is not made, and parameters are not assumed to be valid, so their validity will be checked (or rechecked) within the subprogram.
-gnatVr
Validity checks for function returns.
The expression in
returnstatements.
4.3.6. Style Checking¶ message is given, preceded by
the character sequence ‘ the selected style checking options may:
-gnaty0
Specify indentation level.
If a digit from 1-9 appears in the string after
-gnatythen proper indentation is checked, with the digit indicating the indentation level required. A value of zero turns off this style check. The rule checks that the following constructs start on a column that is a multiple of the alignment level:
beginnings of declarations (except record component declarations) and statements;
beginnings of the structural components of compound statements;
endkeyword that completes the declaration of a program unit declaration or body or that completes a compound statement.used.
The
--that starts the column must either start in column one, or else at least one blank must precede this sequence.
Comments that follow other tokens on a line must have at least one blank following the
--at the start of the comment.
Full line comments must have at least two blanks following the
--that starts the comment, with the following exceptions.
A line consisting only of the
--characters, possibly preceded by blanks is permitted.
A comment starting with
--xwhere
xis a special character is permitted. This allows proper processing of the output from specialized tools such as
gnatprep(where
--!is used) and in earlier versionsis used).
A line consisting entirely of minus signs, possibly preceded by blanks, is permitted. This allows the construction of box comments where lines of minus signs are used to form the top and bottom of the box.
A comment that starts and ends with
-exceptyD
Check declared identifiers in mixed case.
Declared identifiers must be in mixed case, as in This_Is_An_Identifier. Use -gnatyr in addition to ensure that references match declarations.
-gnatye
Check end/exit labels.
Optional labels on
endstatements ending subprograms and on
exitstatementsmust appear either on the same line as corresponding
if, or on a line on its own, lined up under the
if.
-gnatyI
check mode IN keywords.
Mode
in(the default mode) is not allowed to be given explicitly.
in outis fine, but not
inon its own.
-gnatyk
Check keyword casing.
All keywords must be in lower case (with the exception of keywords such as
digitsused as attribute names to which this check does not apply). A single error is reported for each line breaking this rule even if multiple casing issues exist on a same line.
-gnatyl
Check layout. up under the block label. For example both the following are permitted:
Block : declare A : Integer := 3; begin Proc (A, A); end Block; Block : declare A : Integer := 3; begin Proc (A, A); end Block;
The same alternative format is allowed for loops. For example, both of the following are permitted:
Clear : while J < 10 loop A (J) := 0; end loop Clear; Clear : while J < 10 loop A (J) := 0; end loop Clear;
(‘specs’)or
elsekeyword following the keyword in an
ifstatement.
or elseand
and thenare not affected, and a special exception allows a pragma to appear after
else.
-gnatyt
Check token spacing.
The following token spacing rules are enforced:
The keywords
absand
notmust be followed by a space.
The token
=>must be surrounded by spaces.
The token
<>must be preceded by a space or a left parenthesis.
Binary operators other than
**must be surrounded by spaces. There is no restriction on the layout of the
**binary operator.
Colon must be surrounded by spaces.
Colon-equal (assignment, initialization) must be surrounded by spaces.
Comma must be the first non-blank character on the line, or be immediately preceded by a non-blank character, and must be followed by a space.
If the token preceding a left parenthesis ends with a letter or digit, then a space must separate the two tokens.
If the token following a right parenthesis starts with a letter or digit, then a space must separate the two tokens.
A right parenthesis must either be the first non-blank character on a line, or it must be preceded by a non-blank character.
A semicolon must not be preceded by a space, and must not be followed by a non-blank character.
A unary plus or minus may not be followed by a space.
A vertical bar must be surrounded by spaces.
Exactly one blank (and no other white space) must appear between a
nottoken and a following
intoken.
statements,
whilestatements and
exitstatements.
-gnatyy
Set all standard style check options.
This is equivalent to.
-gnaty-
Remove style check options.
This causes any subsequent options in the string to act as canceling the corresponding style check option. To cancel maximum nesting level control, use the
Lparameter without any integer value after that, because any digit following - in the parameter string of the
-gnatyoption will be treated as canceling the indentation check. The same is true for the
Mparameter.
yand
Nparameters are not allowed after -.
-gnaty+
Enable style check options.
This causes any subsequent options in the string to enable the corresponding style check option. That is, it cancels the effect of a previous -, if any. ‘ the use of
-gnatyy as described above, that is all
built-in standard style check options are enabled.
The switch
-gnatyN clears any previously set style checks.
4.3.7. Run-Time Checks¶
By default, the following checks are suppressed: stack overflow
checks, and checks for access before elaboration on subprogram
calls. All other checks, including overflow checks, range checks and
array bounds checks, are turned on by default. The following
gcc
switches refine this default behavior.
-gnatp
This switch causes the unit to be compiled as though
pragma Suppress (All_checks)had been present in the source. Validity checks are also eliminated (in other words
-gnatpalso ‘raise’, erroneous execution if that assumption is wrong.
The checks subject to suppression include all the checks defined by the Ada standard, the additional implementation defined checks
Alignment_Check,
Duplicated_Tag_Check,
Predicate_Check,
Container_Checks,
Tampering_Check, and
Validity_Check, as well as any checks introduced using
pragma Check_Name. Note that
Atomic_Synchronizationis not automatically suppressed by use of this option.
If the code depends on certain checks being active, you can use pragma
Unsuppresseither as a configuration pragma or as a local pragma to make sure that a specified check is performed even if
gnatpis specified.
The
-gnatpswitch has no effect if a subsequent
-gnat-pswitch appears.
-gnat-p
This switch cancels the effect of a previous
gnatpswitch.
-gnato??
This switch controls the mode used for computing intermediate arithmetic integer operations, and also enables overflow checking. For a full description of overflow mode and checking control, see the ‘Overflow Check Handling in GNAT’ appendix in this User’s Guide.
Overflow checks are always enabled by this switch. The argument controls the mode, using the codes
- 1 = STRICT
In STRICT mode, intermediate operations are always done using the base type, and overflow checking ensures that the result is within the base type range.
- 2 = MINIMIZED
In MINIMIZED mode, overflows in intermediate operations are avoided where possible by using a larger integer type for the computation (typically
Long_Long_Integer). Overflow checking ensures that the result fits in this larger integer type.
- 3 = ELIMINATED
In ELIMINATED mode, overflows in intermediate operations are avoided by using multi-precision arithmetic. In this case, overflow checking has no effect on intermediate operations (since overflow is impossible).
If two digits are present after
-gnatotin previous versions of GNAT.
Note that the
-gnato??switch does not affect the code generated for any floating-point operations; it applies only to integer semantics. For floating-point, GNAT has the
Machine_Overflowsattribute set to
Falseand the normal mode of operation is to generate IEEE NaN and infinite values on overflow or invalid operations (such as dividing 0.0 by 0.0).
The reason that we distinguish overflow checking from other kinds of range constraint checking is that a failure of an overflow check,11(equivalent to
-gnato1), so overflow checking is performed in STRICT mode by default.
-gnatE
Enables dynamic checks for access-before-elaboration on subprogram calls and generic instantiations. Note that
-gnatEis not necessary for safety, because in the default mode, GNAT ensures statically that the checks would not fail. For full details of the effect and use of this switch, Compiling with gcc.
-fstack-check
Activates stack overflow checking. For full details of the effect and use of this switch see Stack Overflow Checking.
The setting of these switches only controls the default setting of the
checks. You may modify them using either
Suppress (to remove
checks) or
Unsuppress (to add back suppressed checks) pragmas in
the program source.
4.3.8. Using
gcc for Syntax Checking¶
-gnats
The
sstands for ‘syntax’.
Run GNAT in syntax checking only mode. For example, the command
$ gcc -c -gnats x.adb
compiles file
x.adbin syntax-check-only mode. You can check a series of files in a single command , and can use wildcards to specify such a group of files. Note that you must specify the
-c(compile only) flag in addition to the
-gnatsflag.
You may use other switches in conjunction with
-gnats. In particular,
-gnatland
-gnatvarewiths a unit
Y, compiling unit
Xutility (Renaming Files with gnatchop).
4.3.9. Using
gcc for Semantic Checking¶
-gnatc
The
cstands for ‘check’. needed source files must be accessible (see Search Paths and the Run-Time Library (RTL)).
Each file must contain only one compilation unit.
The file name and unit name must match (File Naming Rules).
The output consists of error messages as appropriate. No object file is generated. An
ALIfile).
4.3.10. Compiling Different Versions of Ada¶
The switches described in this section allow you to explicitly specify the version of the Ada language that your programs are written in. The default mode is Ada 2012, but you can also specify Ada 95, Ada 2005 mode, or indicate Ada 83 compatibility mode.
-gnat83(Ada 83 Compatibility Mode)
Although GNAT is primarily an Ada 95 / Ada 2005 compiler, this switch specifies that the program is to be compiled in Ada 83 mode. With
-gnat83, GNAT rejects most post-Ada 83 extensions and applies Ada 83 semantics where this can be done easily. It is not possible to guarantee this switch does a perfect job; some subtle tests, such as are found in earlier ACVC tests (and that have been removed from the ACATS suite for Ada 95), might not compile correctly. Nevertheless, this switch may be useful in some circumstances, for example where, due to contractual reasons, existing code needs to be maintained using only Ada 83 features.
With few exceptions (most notably the need to use
<>on unconstrained generic formal parameters, the use of the new Ada 95 / Ada 2005 reserved words, and the use of packages with optional bodies), it is not necessary to specify the
-gnat83switch the Compatibility and Porting Guide chapter in the GNAT Reference Manual.
-gnat95(Ada 95 mode)
This switch directs the compiler to implement the Ada 95 version of the language. Since Ada 95 is almost completely upwards compatible with Ada 83, Ada 83 programs may generally be compiled using this switch (see the description of the
-gnat83switch for further information about Ada 83 mode). If an Ada 2005 program is compiled in Ada 95 mode, uses of the new Ada 2005 features will cause error messages or warnings.
This switch also can be used to cancel the effect of a previous
-gnat83,
-gnat05/2005, or
-gnat12/2012switch earlier in the command line.
-gnat05or
-gnat2005(Ada 2005 mode)
This switch directs the compiler to implement the Ada 2005 version of the language, as documented in the official Ada standards document. Since Ada 2005 is almost completely upwards compatible with Ada 95 (and thus also with Ada 83), Ada 83 and Ada 95 programs may generally be compiled using this switch (see the description of the
-gnat83and
-gnat95switches for further information).
-gnat12or
-gnat2012(Ada 2012 mode)
This switch directs the compiler to implement the Ada 2012 version of the language (also the default). Since Ada 2012 is almost completely upwards compatible with Ada 2005 (and thus also with Ada 83, and Ada 95), Ada 83 and Ada 95 programs may generally be compiled using this switch (see the description of the
-gnat83,
-gnat95, and
-gnat05/2005switches for further information).
-gnat2022(Ada 2022 mode)
This switch directs the compiler to implement the Ada 2022 version of the language.
-gnatX(Enable GNAT Extensions)
This switch directs the compiler to implement the latest version of the language (currently Ada 2022) and also to enable certain GNAT implementation extensions that are not part of any Ada standard. For a full list of these extensions, see the GNAT reference manual,
Pragma Extensions_Allowed.
4.3.11. Character Set Control¶
-gnatic
Normally GNAT recognizes the Latin-1 character set in source program identifiers, as described in the Ada Reference Manual. This switch causes GNAT to recognize alternate character sets in identifiers.
cis a single character indicating the character set, as follows:
See Foreign Language Representation for full details on the implementation of these character sets.
-gnatWe
Specify the method of encoding for wide characters.
eis one of the following:
For full details on these encoding methods see Wide_Character Encodings. Note that brackets coding is always accepted, even if one of the other options is specified, so for example
-gnatW8specifies.
4.3.12. File Naming Control¶
-gnatkn
Activates file name ‘krunching’.
n, a decimal integer in the range 1-999, indicates the maximum allowable length of a file name (not including the
.adsor
.adbextension). The default is not to enable file name krunching.
For the source file naming rules, File Naming Rules.
4.3.13. Subprogram Inlining Control¶
-gnatn[12]
The
nhere is intended to suggest the first syllable of the word ‘inline’. GNAT recognizes and processes
Inlinepragmas. However, for inlining to actually occur, optimization must be enabled and, by default, inlining of subprograms across units is not performed. If you want to additionally enable inlining of subprograms specified by pragma
Inlineacross units, you must also specify this switch.
In the absence of this switch, GNAT does not attempt inlining across units and does not access the bodies of subprograms for which
pragma Inlineis specified if they are not in the current unit.
You can optionally specify the inlining level: 1 for moderate inlining across units, which is a good compromise between compilation times and performances at run time, or 2 for full inlining across units, which may bring about longer compilation times. If no inlining level is specified, the compiler will pick it based on the optimization level: 1 for
-O1,
-O2or
-Osand 2 for
-O3.
If you specify this switch the compiler will access these bodies, creating an extra source dependency for the resulting object file, and where possible, the call will be inlined. For further details on when inlining is possible see Inlining of Subprograms.
-gnatN
This switch activates front-end inlining which also generates additional dependencies.
When using a gcc-based back end, then the use of
-gnatNis deprecated, and the use of
-gnatnis preferred. Historically front end inlining was more extensive than the gcc back end inlining, but that is no longer the case.
4.3.14. Auxiliary Output Control¶
-gnatu
Print a list of units required by this compilation on
stdout. The listing includes all units on which the unit being compiled depends either directly or indirectly.
-pass-exit-codes
If this switch is not used, the exit code returned by
gccwhen compiling multiple files indicates whether all source files have been successfully used to generate object files or not.
When
-pass-exit-codesis used,
gccexits with an extended exit status and allows an integrated development environment to better react to a compilation failure. Those exit status are:
4.3.15. Debugging Control¶
-gnatdx
Activate internal debugging switches.
xis a letter or digit, or string of letters or digits, which specifies the type of debugging outputs desired. Normally these are used only for internal development or system debugging purposes. You can find full documentation for these switches in the body of the
Debugunit in the compiler source file
debug.adb.
-gnatG[=nn]
This switch causes the compiler to generate auxiliary output containing a pseudo-source listing of the generated expanded code. Like most Ada compilers, GNAT works by first transforming the high level Ada code into lower level constructs. For example, tasking operations are transformed into calls to the tasking run-time routines. A unique capability of GNAT is to list this expanded code in a form very close to normal Ada source. This is very useful in understanding the implications of various Ada usage on the efficiency of the generated code. There are many cases in Ada (e.g., the use of controlled types), where simple Ada statements can generate a lot of run-time code. By using
-gnatGyou can identify these cases, and consider whether it may be desirable to modify the coding approach to improve efficiency.
The optional parameter
nnif present after -gnatG specifies an alternative maximum line length that overrides the normal default of 72. This value is in the range 40-999999, values less than 40 being silently reset to 40. The equal sign is optional. spec of package
Sprintin file
sprint.adsfor a full list.
If the switch
-gnatLis used in conjunction with
-gnatG, then the original source lines are interspersed in the expanded source (as comment lines with the original line number).
new xxx [storage_pool = yyy]
Shows the storage pool being used for an allocator.
at end procedure-name;
Shows the finalization (cleanup) procedure for a scope.
(if expr then expr else expr)
Conditional expression equivalent to the
x?y:zconstruction in C.
target^(source)
A conversion with floating-point truncation instead of rounding.
target?(source)
A conversion that bypasses normal Ada semantic checking. In particular enumeration types and fixed-point types are treated simply as integers.
target?^(source)
Combines the above two cases.
x #/ y
x #mod y
x # y
x #rem y
A division or multiplication of fixed-point values which are treated as integers without any kind of scaling.
free expr [storage_pool = xxx]
Shows the storage pool associated with a
freestatement.
[subtype or type declaration]
Used to list an equivalent declaration for an internally generated type that is referenced elsewhere in the listing.
freeze type-name [actions]
Shows the point at which
type-nameis frozen, with possible associated actions to be performed at the freeze point.
reference itype
Reference (and hence definition) to internal type
itype.
function-name! (arg, arg, arg)
Intrinsic function call.
label-name : label
Declaration of label
labelname.
#$ subprogram-name
An implicit call to a run-time support routine (to meet the requirement of H.3.1(9) in a convenient manner).
expr && expr && expr ... && expr
A multiple concatenation (same effect as
expr&
expr&
expr, but handled more efficiently).
[constraint_error]
Raise the
Constraint_Errorexception.
expression'reference
A pointer to the result of evaluating {expression}.
target-type!(source-expression)
An unchecked conversion of
source-expressionto
target-type.
[numerator/denominator]
Used to represent internal real literals (that) have no exact representation in base 2-16 (for example, the result of compile time evaluation of the expression 1.0/27.0).
-gnatD[=nn]
When used in conjunction with
-gnatG, this switch causes the expanded source, as described above for
-gnatGto be written to files with names
xxx.dg, where
xxxis the normal file name, instead of to the standard output file. For example, if the source file name is
hello.adb, then a file
hello.adb.dgwill be written. The debugging information generated by the
gcc
-gswitch will refer to the generated
xxx.dgfile. This allows you to do source level debugging using the generated code which is sometimes useful for complex code, for example to find out exactly which part of a complex construction raised an exception. This switch also suppresses generation of cross-reference information (see
-gnatx) since otherwise the cross-reference information would refer to the
.dgfile, which would cause confusion since this is not the original source file.
Note that
-gnatDactually implies
-gnatGautomatically, so it is not necessary to give both options. In other words
-gnatDis equivalent to
-gnatDG).
If the switch
-gnatLis used in conjunction with
-gnatDG, then the original source lines are interspersed in the expanded source (as comment lines with the original line number).
The optional parameter
nnif present after -gnatD specifies an alternative maximum line length that overrides the normal default of 72. This value is in the range 40-999999, values less than 40 being silently reset to 40. The equal sign is optional.
-gnatr
This switch causes pragma Restrictions to be treated as Restriction_Warnings so that violation of restrictions causes warnings rather than illegalities. This is useful during the development process when new restrictions are added or investigated. The switch also causes pragma Profile to be treated as Profile_Warnings, and pragma Restricted_Run_Time and pragma Ravenscar set restriction warnings rather than restrictions.
-gnatR[0|1|2|3|4][e][j][m][s]
This switch controls output from the compiler of a listing showing representation information for declared types, objects and subprograms. For
-gnatR0, no information is output (equivalent to omitting the
-gnatRswitch). For
-gnatR1(which is the default, so
-gnatRwith no parameter has the same effect), size and alignment information is listed for declared array and record types.
For
-gnatR2, size and alignment information is listed for all declared types and objects. The
Linker_Sectionis also listed for any entity for which the
Linker_Sectionis set explicitly or implicitly (the latter case occurs for objects of a type for which a
Linker_Sectionis set).
For
-gnatR3, symbolic expressions for values that are computed at run time for records are included. These symbolic expressions have a mostly obvious format with #n being used to represent the value of the n’th discriminant. See source files
repinfo.ads/adbin the GNAT sources for full details on the format of
-gnatR3output.
For
-gnatR4, information for relevant compiler-generated types is also listed, i.e. when they are structurally part of other declared types and objects.
If the switch is followed by an
e(e.g.
-gnatR2e), then extended representation information for record sub-components of records is included.
If the switch is followed by an
m(e.g.
-gnatRm), then subprogram conventions and parameter passing mechanisms for all the subprograms are included.
If the switch is followed by a
j(e.g.,
-gnatRj), then the output is in the JSON data interchange format specified by the ECMA-404 standard. The semantic description of this JSON output is available in the specification of the Repinfo unit present in the compiler sources.
If the switch is followed by an
s(e.g.,
-gnatR3s), then the output is to a file with the name
file.repwhere
fileis the name of the corresponding source file, except if
jis also specified, in which case the file name is
file.json.
Note that it is possible for record components to have zero size. In this case, the component clause uses an obvious extension of permitted Ada syntax, for example
at 0 range 0 .. -1.
-gnatS
The use of the switch
-gnatSfor an Ada compilation will cause the compiler to output a representation of package Standard in a form very close to standard Ada. It is not quite possible to do this entirely in standard Ada (since new numeric base types cannot be created in standard Ada), but the output is easily readable to any Ada programmer, and is useful to determine the characteristics of target dependent types in package Standard.
-gnatx
Normally the compiler generates full cross-referencing information in the
ALIfile. This information is used by a number of tools, including
gnatfindand
gnatxref. The
-gnatxswitch suppresses this information. This saves some space and may slightly speed up compilation, but means that these tools cannot be used.
-fgnat-encodings=[all|gdb|minimal]
This switch controls the balance between GNAT encodings and standard DWARF emitted in the debug information.
Historically, old debug formats like stabs were not powerful enough to express some Ada types (for instance, variant records or fixed-point types). To work around this, GNAT introduced proprietary encodings that embed the missing information (“GNAT encodings”).
Recent versions of the DWARF debug information format are now able to correctly describe most of these Ada constructs (“standard DWARF”). As third-party tools started to use this format, GNAT has been enhanced to generate it. However, most tools (including GDB) are still relying on GNAT encodings.
To support all tools, GNAT needs to be versatile about the balance between generation of GNAT encodings and standard DWARF. This is what
-fgnat-encodingsis about.
=all: Emit all GNAT encodings, and then emit as much standard DWARF as possible so it does not conflict with GNAT encodings.
=gdb: Emit as much standard DWARF as possible as long as the current GDB handles it. Emit GNAT encodings for the rest.
=minimal: Emit as much standard DWARF as possible and emit GNAT encodings for the rest.
4.3.16. Exception Handling Control¶
GNAT uses two methods for handling exceptions at run time. The
setjmp ‘zero..
--RTS=sjlj
This switch causes the setjmp/longjmp run-time (when available) to be used for exception handling. If the default mechanism for the target is zero cost exceptions, then this switch can be used to modify this default, and must be used for all units in the partition. This option is rarely used. One case in which it may be advantageous is if you have an application where exception raising is common and the overall performance of the application is improved by favoring exception propagation.
--RTS=zcx
This switch causes the zero cost approach to be used for exception handling. If this is the default mechanism for the target (see below), then this switch is unneeded. If the default mechanism for the target is setjmp/longjmp exceptions, then this switch can be used to modify this default, and must be used for all units in the partition. This option can only be used if the zero cost approach is available for the target in use, otherwise it will generate an error.
The same option
--RTS must be used both for
gcc
and
gnatbind. Passing this option to
gnatmake
(Switches for gnatmake) will ensure the required consistency
through the compilation and binding steps.
4.3.17. Units to Sources Mapping Files¶
-gnatem=path
A mapping file is a way to communicate to the compiler two mappings: from unit names to file names (without any directory information) and from file names to path names (with full directory information). These mappings are used by the compiler to short-circuit the path search.switch is not a switch that you would use explicitly. It is intended primarily for use by automatic tools such as
gnatmakerunning
%sappended for specs and
%bappended for bodies; the second line is the file name; and the third line is the path name.
Example:
main%b main.2.ada /gnat/project1/sources/main.2.ada
When the switch
-gnatemis specified, the compiler will create in memory the two mappings from the specified file. If there is any problem (nonexistent file, truncated file or duplicate entries), no mapping will be created.
Several
-gnatemswitches may be specified; however, only the last one on the command line will be taken into account.
When using a project file,
gnatmakecreates a temporary mapping file and communicates it to the compiler using this switch.
4.3.18. Code Generation Control¶ technology is tested and qualified without any
-m switches,
so generally the most reliable approach is to avoid the use of these
switches. However, we generally expect most of these switches to work
successfully with GN.
4.4. Linker Switches¶
Linker switches can be specified after
-largs builder switch.
-fuse-ld=name
Linker to be used. The default is
bfdfor
ld.bfd, the alternative being
goldfor
ld.gold. The later is a more recent and faster linker, but only available on GNU/Linux platforms.
4.5. Binding with
gnatbind¶
This chapter describes the GNAT binder,
gnatbind, which is used
to bind compiled GNAT objects.
The
gnatbind program performs four separate functions:
Checks that a program is consistent, in accordance with the rules in Chapter 10 of the Ada Reference Manual. In particular, error messages are generated if a program uses inconsistent versions of a given unit.
Checks that an acceptable order of elaboration exists for the program and issues an error message if it cannot find an order of elaboration that satisfies the rules in Chapter 10 of the Ada Language Manual.
Generates a main program incorporating the given elaboration order. This program is a small Ada package (body and spec) that must be subsequently compiled using the GNAT compiler. The necessary compilation step is usually performed automatically by
gnatlink. The two most important functions of this program are to call the elaboration routines of units in an appropriate order and to call the main program.
Determines the set of object files required by the given main program. This information is output in the forms of comments in the generated program, to be read by the
gnatlinkutility used to link the Ada application.
4.5.1. Running
gnatbind¶
The form of the
gnatbind command is
$ gnatbind [ switches ] mainprog[.ali] [ switches ]
where
mainprog.adb is the Ada file containing the main program
unit body.:
Enter
gcc -c hello.adbto compile the main program.
Enter
gcc -c p.adsto compile package
P.
Edit file
p.ads.
Enter.
4.5.2. Switches for
gnatbind¶
The following switches are available with
gnatbind; details will
be presented in subsequent sections.
--version
Display Copyright and version, then exit disregarding all other options.
If
--versionwas not used, display usage, then exit disregarding all other options.
-a
Indicates that, if supported by the platform, the adainit procedure should be treated as an initialisation routine by the linker (a constructor). This is intended to be used by the Project Manager to automatically initialize shared Stand-Alone Libraries.
-aO
Specify directory to be searched for ALI files.
-aI
Specify directory to be searched for source file.
-A[=filename]
Output ALI list (to standard output or to the named file).
-b
Generate brief messages to
stderreven if verbose mode set.
-c
Check only, no generation of binder output file.
-dnn[k|m]
This switch can be used to change the default task stack size value to a specified size
nn, which is expressed in bytes by default, or in kilobytes when suffixed with
kor in megabytes when suffixed with
m. In the absence of a
[k|m]suffix, this switch is equivalent, in effect, to completing all task specs with
pragma Storage_Size (nn);
When they do not already have such a pragma.
-Dnn[k|m]
Set the default secondary stack size to
nn. The suffix indicates whether the size is in bytes (no suffix), kilobytes (
ksuffix) or megabytes (
msuffix).
The secondary stack holds objects of unconstrained types that are returned by functions, for example unconstrained Strings. The size of the secondary stack can be dynamic or fixed depending on the target.
For most targets, the secondary stack grows on demand and is implemented as a chain of blocks in the heap. In this case, the default secondary stack size determines the initial size of the secondary stack for each task and the smallest amount the secondary stack can grow by.
For Ravenscar, ZFP, and Cert run-times the size of the secondary stack is fixed. This switch can be used to change the default size of these stacks. The default secondary stack size can be overridden on a per-task basis if individual tasks have different secondary stack requirements. This is achieved through the Secondary_Stack_Size aspect that takes the size of the secondary stack in bytes.
-e
Output complete list of elaboration-order dependencies.
-Ea
Store tracebacks in exception occurrences when the target supports it. The “a” is for “address”; tracebacks will contain hexadecimal addresses, unless symbolic tracebacks are enabled.
See also the packages
GNAT.Tracebackand
GNAT.Traceback.Symbolicfor more information. Note that on x86 ports, you must not use
-fomit-frame-pointer
gccoption.
-Es
Store tracebacks in exception occurrences when the target supports it. The “s” is for “symbolic”; symbolic tracebacks are enabled.
-E
Currently the same as
-Ea.
-felab-order
Force elaboration order. For further details see Elaboration Control and Elaboration Order Handling in GNAT.
-F
Force the checks of elaboration flags.
gnatbinddoes not normally generate checks of elaboration flags for the main executable, except when a Stand-Alone Library is used. However, there are cases when this cannot be detected by gnatbind. An example is importing an interface of a Stand-Alone Library through a pragma Import and only specifying through a linker switch this Stand-Alone Library. This switch is used to guarantee that elaboration flag checks are generated.
-h
Output usage (help) information.
-H
Legacy elaboration order model enabled. For further details see Elaboration Order Handling in GNAT.
-H32
Use 32-bit allocations for
__gnat_malloc(and thus for access types). For further details see Dynamic Allocation Control.
-H64
Use 64-bit allocations for
__gnat_malloc(and thus for access types). For further details see Dynamic Allocation Control.
-I
Specify directory to be searched for source and ALI files.
-I-
Do not look for sources in the current directory where
gnatbindwas invoked, and do not look for ALI files in the directory containing the ALI file named in the
gnatbindcommand line.
-l
Output chosen elaboration order.
-Lxxx
Bind the units for library building. In this case the
adainitand
adafinalprocedures (Binding with Non-Ada Main Programs) are renamed to
xxxinitand
xxxfinal. Implies -n. (GNAT and Libraries, for more details.)
-Mxyz
Rename generated main program from main to xyz. This option is supported on cross environments only.
-mn
Limit number of detected errors or warnings to
n, where
nis in the range 1..999999. The default value if no switch is given is 9999. If the number of warnings reaches this limit, then a message is output and further warnings are suppressed, the bind continues in this case. If the number of errors reaches this limit, then a message is output and the bind is abandoned. A value of zero means that no limit is enforced. The equal sign is optional.
-minimal
Generate a binder file suitable for space-constrained applications. When active, binder-generated objects not required for program operation are no longer generated. Warning: this option comes with the following limitations:
Starting the program’s execution in the debugger will cause it to stop at the start of the
mainfunction instead of the main subprogram. This can be worked around by manually inserting a breakpoint on that subprogram and resuming the program’s execution until reaching that breakpoint.
Programs using GNAT.Compiler_Version will not link.
-n
No main program.
-nostdinc
Do not look for sources in the system default directory.
-nostdlib
Do not look for library files in the system default directory.
--RTS=rts-path
Specifies the default location of the run-time library. Same meaning as the equivalent
gnatmakeflag (Switches for gnatmake).
-o file
Name the output file
file(default is
b~`xxx.adb`). Note that if this option is used, then linking must be done manually, gnatlink cannot be used.
-O[=filename]
Output object list (to standard output or to the named file).
-p
Pessimistic (worst-case) elaboration order.
-P
Generate binder file suitable for CodePeer.
-R
Output closure source list, which includes all non-run-time units that are included in the bind.
-Ra
Like
-Rbut the list includes run-time units.
-s
Require all source files to be present.
-Sxxx
Specifies the value to be used when detecting uninitialized scalar objects with pragma Initialize_Scalars. The
xxxstring specified with the switch is one of:
infor an invalid one bits. For floating-point types, a NaN value is set (see body of package System.Scalar_Values for exact values).
lofor low zero bits. For floating-point, a small value is set (see body of package System.Scalar_Values for exact values).
hifor high value.
If zero is invalid for the discrete type in question, then the scalar value is set to all one bits. For signed discrete types, the largest possible positive value of the underlying scalar is set (i.e. a zero bit followed by all one bits). For unsigned discrete types, the underlying scalar value is set to all one bits. For floating-point, a large value is set (see body of package System.Scalar_Values for exact values).
xxfor hex value (two hex digits).
The underlying scalar is set to a value consisting of repeated bytes, whose value corresponds to the given value. For example if
BFis given, then a 32-bit scalar value will be set to the bit patterm
16#BFBFBFBF#.
In addition, you can specify
-Sevto indicate that the value is to be set at run time. In this case, the program will look for an environment variable of the form
GNAT_INIT_SCALARS=yy, where
yyis one of
in/lo/hi/xxwith the same meanings as above. If no environment variable is found, or if it does not have a valid value, then the default is
in(invalid values).
-static
Link against a static GNAT run-time.
-shared
Link against a shared GNAT run-time when available.
-t
Tolerate time stamp and other consistency errors.
-Tn
Set the time slice value to
nmilliseconds. If the system supports the specification of a specific time slice value, then the indicated value is used. If the system does not support specific time slice values, but does support some general notion of round-robin scheduling, then any nonzero value will activate round-robin scheduling..
-un
Enable dynamic stack usage, with
nresults stored and displayed at program termination. A result is generated when a task terminates. Results that can’t be stored are displayed on the fly, at task termination. This option is currently not supported on Itanium platforms. (See Dynamic Stack Usage Analysis for details.)
-v
Verbose mode. Write error messages, header, summary output to
stdout.
-Vkey=value
Store the given association of
keyto
valuein the bind environment. Values stored this way can be retrieved at run time using
GNAT.Bind_Environment.
-wx
Warning mode;
x= s/e for suppress/treat as error.
-Wxe
Override default wide character encoding for standard Text_IO files.
-x
Exclude source files (check object consistency only).
-xdr
Use the target-independent XDR protocol for stream oriented attributes instead of the default implementation which is based on direct binary representations and is therefore target-and endianness-dependent. However it does not support 128-bit integer types and the exception
Ada.IO_Exceptions.Device_Erroris raised if any attempt is made at streaming 128-bit integer types with it.
-Xnnn
Set default exit status value, normally 0 for POSIX compliance.
-y
Enable leap seconds support in
Ada.Calendarand its children.
-z
No main subprogram.
You may obtain this listing of switches by running
gnatbind with
no arguments.
4.5.2.1. Consistency-Checking Modes¶xe
Override default wide character encoding for standard Text_IO files. Normally the default wide character encoding method used for standard [Wide_[Wide_]]Text_IO files is taken from the encoding specified for the main source input (see description of switch
-gnatWxforbecause in this case the checking against sources has already been performed by
gnatmakein the course of compilation (i.e., before binding).
4.5.2.2. Binder Error Message Control¶
The following switches provide control over the generation of error messages from the binder:
-v
Verbose mode. In the normal mode, brief error messages are generated to
stderr. If this switch is present, a header is written to
stdoutand any error messages are directed to
stdout. All that is written to
stderris a brief summary message.
-b
Generate brief error messages to
stderreven if verbose mode is specified. This is relevant only when used with the
-vswitch.
-mn
Limits the number of error messages to
n, a decimal integer in the range 1-999. The binder terminates immediately if this limit is reached.
-Mxxx
Renames the generated main program from
mainto
xxx. This is useful in the case of some cross-building environments, where the actual main program is separate from the one generated by
gnatbind.
-ws
Suppress all warning messages.
-we
Treat any warning messages as fatal errors.
-t
The binder performs a number of consistency checks including:
Check that time stamps of a given source unit are consistent
Check that checksums of a given source unit are consistent
Check that consistent versions of
GNATwere used for compilation
Check consistency of configuration pragmas as required
Normally failure of such checks, in accordance with the consistency requirements of the Ada Reference Manual, causes error messages to be generated which abort the binder and prevent the output of a binder file and subsequent link to obtain an executable.
The
-tswitch converts these error messages into warnings, so that binding and linking can continue to completion even in the presence of such errors. The result may be a failed link (due to missing symbols), or a non-functional executable which has undefined semantics.
Note
This means that
-tshould be used only in unusual situations, with extreme care.
4.5.2.3. Elaboration Control¶
The following switches provide additional control over the elaboration order. For further details see Elaboration Order Handling in GNAT.
-felab-order
Force elaboration order.
elab-ordershould be the name of a “forced elaboration order file”, that is, a text file containing library item names, one per line. A name of the form “some.unit%s” or “some.unit (spec)” denotes the spec of Some.Unit. A name of the form “some.unit%b” or “some.unit (body)” denotes the body of Some.Unit. Each pair of lines is taken to mean that there is an elaboration dependence of the second line on the first. For example, if the file contains:
this (spec) this (body) that (spec) that (body)
then the spec of This will be elaborated before the body of This, and the body of This will be elaborated before the spec of That, and the spec of That will be elaborated before the body of That. The first and last of these three dependences are already required by Ada rules, so this file is really just forcing the body of This to be elaborated before the spec of That.
The given order must be consistent with Ada rules, or else
gnatbindwill give elaboration cycle errors. For example, if you say x (body) should be elaborated before x (spec), there will be a cycle, because Ada rules require x (spec) to be elaborated before x (body); you can’t have the spec and body both elaborated before each other.
If you later add “with That;” to the body of This, there will be a cycle, in which case you should erase either “this (body)” or “that (spec)” from the above forced elaboration order file.
Blank lines and Ada-style comments are ignored. Unit names that do not exist in the program are ignored. Units in the GNAT predefined library are also ignored.
-p
Pessimistic elaboration order
This switch is only applicable to the pre-20.x legacy elaboration models. The post-20.x elaboration model uses a more informed approach of ordering the units.
Normally the binder attempts to choose an elaboration order that is likely to minimize the likelihood of an elaboration order error resulting in raisingswitch if dynamic elaboration checking is used (
-gnatEswitch used for compilation). This is because in the default static elaboration mode, all necessary
Elaborateand
Elaborate_Allpragmas are implicitly inserted. These implicit pragmas are still respected by the binder in
-pmode, so a safe elaboration order is assured.
Note that
-pis not intended for production use; it is more for debugging/experimental use.
4.5.2.4. Output Control¶
The following switches allow additional control over the output generated by the binder.
-c
Check only. Do not generate the binder output file. In this mode the binder performs all error checks but does not generate an output file.
-e
Output complete list of elaboration-order dependencies, showing the reason for each dependency. This output can be rather extensive but may be useful in diagnosing problems with elaboration order. The output is written to
stdout.
-h
Output usage information. The output is written to
stdout.
-K
Output linker options to
stdout. Includes library search paths, contents of pragmas Ident and Linker_Options, and libraries added by
gnatbind.
-l
Output chosen elaboration order. The output is written to
stdout.
-O
Output full names of all the object files that must be linked to provide the Ada component of the program. The output is written
Set name of output file to
fileinstead of the normal
b~`mainprog.adb` default. Note that
filedenote the Ada binder generated body filename. Note that if this option is used, then linking must be done manually. It is not possible to use gnatlink in this case, since it cannot locate the binder file.
-r
Generate list of
pragma Restrictionsthat could be applied to the current unit. This is useful for code audit purposes, and also may be used to improve code generation in some cases.
4.5.2.5. Dynamic Allocation Control¶
The heap control switches –
-H32 and
-H64 –
determine whether dynamic allocation uses 32-bit or 64-bit memory.
They only affect compiler-generated allocations via
__gnat_malloc;
explicit calls to
malloc and related functions from the C
run-time library are unaffected.
-H32
Allocate memory on 32-bit heap
-H64
Allocate memory on 64-bit heap. This is the default unless explicitly overridden by a
'Sizeclause on the access type.
These switches are only effective on VMS platforms.
4.5.2.6. Binding with Non-Ada Main Programs¶
The description so far (Mixed Language Programming).
The following switch is used in this situation:
-n
No main program. The main program is not in Ada.
In this case, most of the functions of the binder are still required, but instead of generating a main program, the binder generates a file containing the following callable routines:
adainit
-
You must call this routine to initialize the Ada part of the program by calling the necessary elaboration routines. A call
-
You must call this routine to perform any library-level finalization required by the Ada subprograms. A call to.
4.5.2.7. Binding Programs with No Main Subprogram¶
It is possible to have an Ada program which does not have a main subprogram. This program will call the elaboration routines of all the packages, then the finalization routines.
The following switch is used to bind programs organized in this manner:
-z
Normally the binder checks that the unit name given on the command line corresponds to a suitable main subprogram. When this switch is used, a list of ALI files can be given, and the execution of the program consists of elaboration of these units in an appropriate order. Note that the default wide character encoding method for standard Text_IO files is always set to Brackets if this switch is set (you can use the binder switch
-Wxto override this default).
4.5.3. Command-Line Access¶.
4.5.4. Search Paths for
gnatbind¶:
The directory containing the ALI file named in the command line, unless the switch
-I-is specified.
All directories specified by
-Iswitches on the
gnatbindcommand line, in the order given.
Each of the directories listed in the text file whose name is given by the
ADA_PRJ_OBJECTS_FILEenvironment variable.
ADA_PRJ_OBJECTS_FILEis normally set by gnatmake or by the gnat driver when project files are used. It should not normally be set by other means.
Each of the directories listed in the value of the
ADA_OBJECTS_PATHenvironment variable. Construct this value exactly as the
PATHenvironment variable: a list of directory names separated by colons (semicolons when working with the NT version of GNAT).
The content of the
ada_object_pathfile which is part of the GNAT installation tree and is used to store standard libraries such as the GNAT Run-Time Library (RTL) unless the switch
-nostdlibis specified. See Installing a libraryO`dir..
4.5.5. Examples of
gnatbind Usage¶
Here are some examples of
gnatbind invovations:
gnatbind hello
The main program
Hello(source program in
hello.adb) is bound using the standard switch settings. The generated main program is
b~hello.adb. This is the normal, default use of the binder.gnatbind hello -o mainprog.adb
The main program
Hello(source program in
hello.adb) is bound using the standard switch settings. The generated main program is
mainprog.adbwith the associated spec in
mainprog.ads. Note that you must specify the body here not the spec. Note that if this option is used, then linking must be done manually, since gnatlink will not be able to find the generated file.
4.6. Linking with
gnatlink¶.
4.6.1. Running
gnatlink¶.
One useful option for the linker is
-s: it reduces the size of the
executable by removing all symbol table and relocation information from the
executable..
4.6.2. Switches for
gnatlink¶
The following switches are available with the
gnatlink utility:
--version
Display Copyright and version, then exit disregarding all other options.
If
--versionwas not used, display usage, then exit disregarding all other options.
-f
On some targets, the command line length is limited, and
gnatlinkwill generate a separate file for the linker if the list of object files is too long. The
-fswitch forces this file to be generated even if the limit is not exceeded. This is useful in some cases to deal with special situations where the command line length is exceeded.
-g
The option to include debugging information causes the Ada bind file (in other words,
b~mainprog.adb) to be compiled with
-g. In addition, the binder does not delete the
b~mainprog.adb,
b~mainprog.oand
b~mainprog.alifiles. Without
-g, the binder removes these files by default.
-n
Do not compile the file generated by the binder. This may be used when a link is rerun with different options, but there is no need to recompile the binder file.
-v
Verbose mode. Causes additional information to be output, including a full list of the included object files. This switch option is most useful when you want to see what set of object files are being used in the link step.
-v -v
Very verbose mode. Requests that the compiler operate in verbose mode when it compiles the binder file, and that the system linker run in verbose mode.
-o exec-name
exec-namespecifies an alternate name for the generated executable program. If this switch is omitted, the executable has the same name as the main unit. For example,
gnatlink try.alicreates an executable called
try.
-Bdir
Load compiler executables (for example,
gnat1, the Ada compiler) from
dirinstead of the default location. Only use this switch when multiple versions of the GNAT compiler are available. See the
Directory Optionssection in The_GNU_Compiler_Collection for further details. You would normally use the
-bor
-Vswitch instead.
-M
When linking an executable, create a map file. The name of the map file has the same name as the executable with extension “.map”.
-M=mapfile
When linking an executable, create a map file. The name of the map file is
mapfile.
--GCC=compiler_name
Program used for compiling the binder file. The default is
gcc. You need to use quotes around
compiler_nameifare used, only the last
compiler_nameis taken into account. However, all the additional switches are also taken into account. Thus,
--GCC="foo -x -y" --GCC="bar -z -t"is equivalent to
--GCC="bar -x -y -z -t".
--LINK=name
nameis the name of the linker to be invoked. This is especially useful in mixed language programs since languages such as C++ require their own linker to be used. When this switch is omitted, the default name for the linker is
gcc. When this switch is used, the specified linker is called instead of
gccwith exactly the same parameters that would have been passed to
gccso if the desired linker requires different parameters it is necessary to use a wrapper script that massages the parameters before invoking the real linker. It may be useful to control the exact invocation by using the verbose switch.
4.7. Using the GNU
make Utility¶.
4.7.1. Using gnatmake in a Makefile¶}
4.7.2. Automatically Creating a List of Directories¶, etc.`` and ``sort`` functions. #}
4.7.3. Generating the Command Line Switches¶
Once you have created the list of directories as explained in the previous section
4.7.4. Overcoming Command Line Length Limits¶_OBJECTS_PATH. # This is the same thing as putting the -I arguments on the command line. # (the equivalent of using -aI on the command line would be to define # only ADA_INCLUDE_PATH, the equivalent of -aO is ADA_OBJECTS_OBJECTS_PATH += ${OBJECT_LIST} export ADA_INCLUDE_PATH export ADA_OBJECTS_PATH all: gnatmake main_unit | https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ugn/building_executable_programs_with_gnat.html | 2021-11-27T04:40:36 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.adacore.com |
ldand MMIX
ldand MSP430
ldand Xtensa Processors
This file documents the gnu linker ld (GNU Binutils) version 2.36.50.
This document is distributed under the terms of the GNU Free Documentation License version 1.3. ‘-l’ option below.)
Some of the command-line options to ld may be specified at any point in the command line. However, options which refer to files, such as ‘-l’ or ‘ ‘-l’, ‘-R’, and the script command language. If no binary input files at all are specified, the linker does not produce any output, and issues the message ‘No ‘ ‘-T’ option
to replace the default linker script entirely, but note the effect of
the
INSERT command., ‘-trace-symbol’ and ‘--trace-symbol’ are equivalent. Note—there is one exception to this rule. Multiple letter options that start with a lower case 'o' can only be preceded by two dashes. This is to reduce confusion with the ‘-o’ option. So for example ‘-omagic’ sets the output file name to ‘magic’ whereas ‘--omagic’ sets the NMAGIC flag on the output.
Arguments to multiple-letter options must either be separated from the option name by an equals sign, or be given as separate arguments immediately following the option that requires them. For example, ‘--trace-symbol foo’ and ‘--trace-symbol=foo’ are equivalent. Unique abbreviations of the names of multiple-letter options are accepted.
Note—if the linker is being invoked indirectly, via a compiler driver (e.g. ‘gcc’) then all the linker command-line options should be prefixed by ‘keyword
--auditAUDITLIB
DT_AUDITentry of the dynamic section. AUDITLIB is not checked for existence, nor will it use the DT_SONAME specified in the library. If specified multiple times
DT_AUDITwill contain a colon separated list of audit interfaces to use. If the linker finds an object with an audit entry while searching for shared libraries, it will add a corresponding
DT_DEPAUDITentry in the output file. This option is only meaningful on ELF platforms supporting the rtld-audit interface.
-binput-format
--format=input-format
You may want to use this option if you are linking files with an unusual binary format. You can also use ‘-b’ to switch formats explicitly (when linking object files of different formats), by including ‘.
--depauditAUDITLIB
-PAUDITLIB
DT_DEPAUDITentry of the dynamic section. AUDITLIB is not checked for existence, nor will it use the DT_SONAME specified in the library. If specified multiple times
DT_DEPAUDITwill contain a colon separated list of audit interfaces to use. This option is only meaningful on ELF platforms supporting the rtld-audit interface. The -P option is provided for Solaris compatibility.
--enable-non-contiguous-regions
MEMORY { MEM1 (rwx) : ORIGIN : 0x1000, LENGTH = 0x14 MEM2 (rwx) : ORIGIN : 0x1000, LENGTH = 0x40 MEM3 (rwx) : ORIGIN : 0x2000, LENGTH = 0x40 } SECTIONS { mem1 : { *(.data.*); } > MEM1 mem2 : { *(.data.*); } > MEM2 mem3 : { *(.data.*); } > MEM2 } with input sections: .data.1: size 8 .data.2: size 0x10 .data.3: size 4 results in .data.1 affected to mem1, and .data.2 and .data.3 affected to mem2, even though .data.3 would fit in mem3.
This option is incompatible with INSERT statements because it changes the way input sections are mapped to output sections.
--enable-non-contiguous-regions-warnings
--enable-non-contiguous-regionsallows possibly unexpected matches in sections mapping, potentially leading to silently discarding a section instead of failing because it does not fit any output region.
module
,module
,...
-E
--export-dynamic
--no ‘--dynamic-list’.
Note that this option is specific to ELF targeted ports. PE targets support a similar function to export all symbols from a DLL or EXE; see the description of ‘--export-all-symbols’ below.
--export-dynamic-symbol=glob
--export-dynamic-symbol-list=file
-EB
-EL
-fname
-namespec
--library=namespec
= or
$SYSROOT, then this
prefix will be replaced by the sysroot prefix, controlled by the
‘--sysroot’ option, or specified when the linker is configured.
The default set of paths searched (without being specified with ‘ ‘.
When the linker merges input .note.gnu.property sections into one output .note.gnu.property section, some properties are removed or updated. These actions are reported in the link map. For example:
Removed property 0xc0000002 to merge foo.o (0x1) and bar.o (not found)
This indicates that property 0xc0000002 is removed from output when merging properties in foo.o, whose property 0xc0000002 value is 0x1, and bar.o, which doesn't have property 0xc0000002.
Updated property 0xc0010001 (0x1) to merge foo.o (0x1) and bar.o (0x1)
This indicates that property 0xc0010001 value is updated to 0x1 in output when merging properties in foo.o, whose 0xc0010001 property value is 0x1, and bar.o, whose 0xc0010001 property value is 0x1.
--print-map-discarded
--no-print-map-discarded
-n
--nmagic
NMAGIC.
-N
--omagic
OMAGIC. Note: Although a writable text section is allowed for PE-COFF targets, it does not conform to the format specification published by Microsoft.
--no-omagic
-ooutput
--output=output
OUTPUTcan also specify the output file name.
--dependency-file=depfile
makedescribing the output file and all the input files that were read to produce it. The output is similar to the compiler's output with ‘-M -MP’ (see Options Controlling the Preprocessor). Note that there is no option like the compiler's ‘-MM’, to exclude “system files” (which is not a well-specified concept in the linker, unlike “system headers” in the compiler). So the output from ‘--dependency-file’ is always specific to the exact state of the installation where it was produced, and should not be copied into distributed makefiles without careful editing.
-Olevel
-pluginname
Note that the location of the compiler originated plugins is different from the place where the ar, nm and ranlib programs search for their plugins. In order for those commands to make use of a compiler based plugin it must first be copied into the ${libdir}/bfd-plugins directory. All gcc based linker plugins are backward compatible, so it is sufficient to just copy in the newest one.
--push-state
--pop-state
-q
--emit-relocs
This option is currently only supported on ELF platforms.
--force-dynamic
-r
--relocatable
OMAGIC. If this option is not specified, an absolute file is produced. When linking C++ programs, this option will not resolve references to constructors; to do that, use ‘ ‘-i’.
-Rfilename
--just-symbols=filename
For compatibility with other ELF linkers, if the -R option is followed by a directory name, rather than a file name, it is treated as the -rpath option.
-s
--strip-all
-S
--strip-debug
--strip-discarded
--no-strip-discarded
-t
--trace
-Tscriptfile
--script=scriptfile
ldlooks for it in the directories specified by any preceding ‘-L’ options. Multiple ‘-T’ options accumulate.
-dTscriptfile
--default-script ‘gcc’).
-usymbol
--undefined=symbol
EXTERNlinker script command.
If this option is being used to force additional modules to be pulled into the link, and if it is an error for the symbol to remain undefined, then the option --require-defined should be used instead.
--require-defined=symbol
EXTERN,
ASSERTand
DEFINEDtogether. This option can be used multiple times to require additional symbols.
-Ur
--orphan-handling=MODE
MODE can have any of the following values:
place
discard
warn
placeand also issue a warning.
error
The default if ‘--orphan-handling’ is not given is
place.
--unique[=SECTION
]
-v
--version
-V
-x
--discard-all
-X
--discard-locals
-ysymbol
--trace-symbol=symbol
This option is useful when you have an undefined symbol in your link but don't know where the reference is coming from.
-Ypath
-zkeyword
NOPpadding when transforming indirect call to a locally defined function, foo, via its GOT slot. call-nop=prefix-addr generates
0x67 call foo. call-nop=suffix-nop generates
call foo 0x90. call-nop=prefix-byte generates byte
call foo. call-nop=suffix-byte generates
call foobyte. Supported for i386 and x86_64.
DF_1_GLOBAUDITbit in the
DT_FLAGS_1dynamic tag. Global auditing requires that any auditing library defined via the --depaudit or -P command-line options be run for all dynamic objects loaded by the application.
dlmopen). This is primarily used to mark fundamental libraries such as libc, libpthread et al which do not usually function correctly unless they are the sole instances of themselves. This behaviour can be overridden by the
dlmopencaller and does not apply to certain loading mechanisms (such as audit libraries).
dlopen.
dldump.
PT_GNU_RELROsegment header in the object. This specifies a memory segment that should be made read-only after relocation, if supported. Specifying ‘common-page-size’ smaller than the system page size will render this protection ineffective. Don't create an ELF
PT_GNU_RELROsegment if ‘norelro’.
PT_LOADsegment header in the object. This specifies a memory segment that should contain only instructions and must be in wholly disjoint pages from any other data. Don't create separate code
PT_LOADsegment if ‘noseparate-code’ is used.
number" to duplicated local symbol names if ‘unique-symbol’ is used. nounique-symbol is the default.
PT_GNU_STACKsegment. Specifying zero will override any default non-zero sized
PT_GNU_STACKsegment creation.
__start_SECNAMEand
__stop_SECNAMEsymbols (see Input Section Example). value must be exactly ‘default’, ‘internal’, ‘hidden’, or ‘protected’. If no ‘-z start-stop-visibility’ option is given, ‘protected’ is used for compatibility with historical practice. However, it's highly recommended to use ‘-z start-stop-visibility=hidden’ in new programs and shared libraries so that these symbols are not exported between shared objects, which is not usually what's intended.
GNU_PROPERTY_X86_ISA_1_BASELINE. x86-64-v2 generates
GNU_PROPERTY_X86_ISA_1_V2. x86-64-v3 generates
GNU_PROPERTY_X86_ISA_1_V3. x86-64-v4 generates
GNU_PROPERTY_X86_ISA_1_V4. Supported for Linux/i386 and Linux/x86_64.
-Bsymbolic-functions
--dynamic-list=dynamic-list-file
The format of the dynamic list is the same as the version node without scope and node name. See VERSION for more information.
--dynamic-list-data
--dynamic-list-cpp-new
--dynamic-list-cpp-typeinfo
--check-sections
--no-check-sections
--copy-dt-needed-entries
--no.
--ctf-variables
--no-ctf-variables
--ctf-share-types=method
--no-define-common
INHIBIT_COMMON_ALLOCATIONhas the same effect. See Miscellaneous Commands.
The ‘--no-define-common’ option allows decoupling the decision to assign addresses to Common symbols from the choice of the output file type; otherwise a non-Relocatable output type forces assigning addresses to Common symbols. Using ‘-.
--force-group-allocation
FORCE_GROUP_ALLOCATIONhas the same effect. See Miscellaneous Commands.
--defsym=symbol
=expression
+and
-to add or subtract hexadecimal constants or symbols. If you need more elaborate expressions, consider using the linker command language from a script (see Assignments). Note: there should be no white space between symbol, the equals sign (“<=>”), and expression.
The linker processes ‘--defsym’ arguments and ‘-T’ arguments in order, placing ‘--defsym’ before ‘-T’ will define the symbol before the linker script from ‘-T’ is processed, while placing ‘--defsym’ after ‘-T’ will define the symbol after the linker script has been processed. This difference has consequences for expressions within the linker script that use the ‘--defsym’ symbols, which order is correct will depend on what you are trying to achieve.
--demangle[=style
]
--no-demangle
-Ifile
--dynamic-linker=file
--no-dynamic-linker
--embedded-relocs
--disable-multiple-abs-defs
--fatal-warnings
--no
‘- ‘--entry’, ‘--undefined’, and ‘--gc-keep-exported’.
This option can be set when doing a partial link (enabled with option ‘-r’). In this case the root of symbols kept must be explicitly specified either by one of the options ‘--entry’, ‘--undefined’ or ‘--gc-keep-exported’.
As a GNU extension, ELF input sections marked with the
SHF_GNU_RETAIN flag will not be garbage collected.
--print-gc-sections
--no-print-gc-sections
--gc-keep-exported
--print-output-format
OUTPUT_FORMATlinker script command (see File Commands).
--print-memory-usage
Memory region Used Size Region Size %age Used ROM: 256 KB 1 MB 25.00% RAM: 32 B 2 GB 0.00%
--target-help
-Map=mapfile
-then the map will be written to stdout.
Specifying a directory as mapfile causes the linker map to be
written as a file inside the directory. Normally name of the file
inside the directory is computed as the basename of the output
file with
.map appended. If however the special character
% is used then this will be replaced by the full path of the
output file. Additionally if there are any characters after the
% symbol then
.map will no longer be appended.
-o foo.exe -Map=bar [Creates ./bar] -o ../dir/foo.exe -Map=bar [Creates ./bar] -o foo.exe -Map=../dir [Creates ../dir/foo.exe.map] -o ../dir2/foo.exe -Map=../dir [Creates ../dir/foo.exe.map] -o foo.exe -Map=% [Creates ./foo.exe.map] -o ../dir/foo.exe -Map=% [Creates ../dir/foo.exe.map] -o foo.exe -Map=%.bar [Creates ./foo.exe.bar] -o ../dir/foo.exe -Map=%.bar [Creates ../dir/foo.exe.bar] -o ../dir2/foo.exe -Map=../dir/% [Creates ../dir/../dir2/foo.exe.map] -o ../dir2/foo.exe -Map=../dir/%.bar [Creates ../dir/../dir2/foo.exe.bar]
It is an error to specify more than one
% character.
If the map file already exists then it will be overwritten by this operation.
--no-keep-memory
--no-undefined
-z defs
The effects of this option can be reverted by using
-z undefs.
--allow-multiple-definition
-z muldefs
--allow-shlib-undefined
--no.
--error-handling-script=scriptname
The availability of this option is controlled by a configure time switch, so it may not be present in specific implementations.
--no-undefined-version
--default-symver
--default-imported-symver
--no-warn-mismatch
--no-warn-search-mismatch
--no-whole-archive
--noinhibit-exec
-nostdlib
--oformat=output-format
OUTPUT_FORMATcan also specify the output format, but this option overrides it. See BFD.
--out-implibfile
*.dll.aor
*.afor DLLs) may be used to link clients against the generated executable; this behaviour makes it possible to skip a separate import library creation step (eg.
dlltoolfor DLLs). This option is only available for the i386 PE and ELF targetted ports of the linker.
-pie
--pic-executable
-qmagic
-Qy
--relax
--no the feature is supported, the option --no-relax will disable it.
On platforms where the feature is not supported, both --relax and --no-relax are accepted, but ignored.
--retain-symbols-file=filename
‘--retain-symbols-file’ does not discard undefined symbols, or symbols needed for relocations.
You may only specify ‘--retain-symbols-file’ once in the command
line. It overrides ‘-s’ and ‘-S’.
-rpath=dir
The -rpath option is also used when locating shared objects which are needed by shared objects explicitly included in the link; see the description of the -rpath-link option. Searching -rpath in this way is only supported by native linkers and cross linkers which have been configured with the --with-sysroot.
The tokens $ORIGIN and $LIB can appear in these search directories. They will be replaced by the full path to the directory containing the program or shared object in the case of $ORIGIN and either ‘lib’ - for 32-bit binaries - or ‘lib64’ - for 64-bit binaries - in the case of $LIB.
The alternative form of these tokens - ${ORIGIN} and ${LIB} can also be used. The token $PLATFORM is not supported.or
DT_RPATHof a shared library are searched for shared libraries needed by it. The
DT_RPATHentries are ignored if
DT_RUNPATHentries exist.
sysrootvalue, if that is defined, and then any
prefixstring if the linker was configured with the --prefix=<path> option.
_PATH_ELF_HINTSmacro defined in the elf-hints.h header file.
SEARCH_DIRcommand in the linker script being used.
If the required shared library is not found, the linker will issue a warning and continue with the link.
-shared
-Bshareable
--sort-common
--sort-common=ascending
--sort-common=descending
--sort-section=name
SORT_BY_NAMEto all wildcard section patterns in the linker script.
--sort-section=alignment
SORT_BY_ALIGNMENTto all wildcard section patterns in the linker script.
--spare-dynamic-tags=count
--split-by-file[=size
]
--split-by-reloc[=count
]
--stats
--sysroot=directory
--task-link
- ‘--traditional-format’ switch tells ld to not
combine duplicate entries.
--section-start=sectionname
=org
-Tbss=org
-Tdata=org
-Ttext=org
.bss,
.dataor
.textas the sectionname.
-Ttext-segment=org
-Trodata-segment=org
-Tldata-segment=org
-[=NUMBER
]
--version-script=version-scriptfile
--warn-common
There are three kinds of global symbols, illustrated here by C examples:
The ‘--textrel
--warn-alternate-em
-.
Only undefined references are replaced by the linker. So, translation unit
internal references to symbol are not resolved to
__wrap_symbol. In the next example, the call to
f in
g is not resolved to
__wrap_f.
int f (void) { return 123; } int g (void) { return f(); }
--eh-frame-hdr
--no-eh-frame-hdr
.eh_frame_hdrsection and ELF
PT_GNU_EH_FRAMEsegment header.
--no-ld-generated-unwind-info
.eh_frameunwind info for linker generated code sections like PLT. This option is on by default if linker generated unwind info is supported.
--enable-new-dtags
--disable-new-dtags
--hash-size=number
--hash-style=style
sysvfor classic ELF
.hashsection,
gnufor new style GNU
.gnu.hashsection or
bothfor both the classic ELF
.hashand new style GNU
.gnu.hashhash tables. The default depends upon how the linker was configured, but for most Linux based systems it will be
both.
--compress-debug-sections=none
--compress-debug-sections=zlib
--compress-debug-sections=zlib-gnu
--compress-debug-sections=zlib-gabi
--compress-debug-sections=none doesn't compress DWARF debug sections. --compress-debug-sections=zlib-gnu compresses DWARF debug sections and renames them to begin with ‘.zdebug’ instead of ‘.debug’. --compress-debug-sections=zlib-gabi also compresses DWARF debug sections, but rather than renaming them it sets the SHF_COMPRESSED flag in the sections' headers.
The --compress-debug-sections=zlib option is an alias for --compress-debug-sections=zlib-gabi.
Note that this option overrides any compression in input debug sections, so if a binary is linked with --compress-debug-sections=none for example, then any compressed debug sections in input files will be uncompressed before they are copied into the output binary.
The default compression behaviour varies depending upon the target involved and the configure options used to build the toolchain. The default can be determined by examining the output from the linker's --help option.
--reduce-memory-overheads
.note.gnu.build-idELF note section or a
.buildidCOFF section. The contents of the note are unique bits identifying this linked file. style can be
uuidto use 128 random bits,
sha1to use a 160-bit SHA1 hash on the normative parts of the output contents,
md5to-long-section-names
--disable-long-section-names
-
--no-leading-underscore
-
,...
--exclude-all-symbols
--file-alignment
--heapreserve
--heapreserve
,commit
--image-basevalue
--kill-at
--large-address-aware
--disable]
--enable-auto-image-base
--enable-auto-image-base=value
-, thus making it possible to bypass the dllimport mechanism on the user side and to reference unmangled symbol names. [This option is specific to the i386 PE targeted port of the linker]
The following remarks pertain to the original implementation of the feature and are obsolete nowadays for Cygwin and MinGW targets.]
--high-entropy-va
--disable-high-entropy-va
This option also implies --dynamicbase and --enable-reloc-section.
--dynamicbase
--disable-dynamicbase
--forceinteg
--disable-forceinteg
--nxcompat
--disable-nxcompat
--no-isolation
--disable-no-isolation
--no-seh
--disable-no-seh
--no-bind
--disable-no-bind
--wdmdriver
--disable-wdmdriver
--tsaware
--disable-tsaware
--insert-timestamp
--no-insert-timestamp
--enable-reloc-section
--disable-reloc-section
The C6X uClinux target uses a binary format called DSBT to support shared libraries. Each shared library in the system needs to have a unique index; all executables use an index of 0.
--dsbt-sizesize
--dsbt-indexindex
R_C6000_DSBT_INDEXrelocs are copied into the output file.
The ‘-.
--no-trampoline
jsrinstruction (this happens when a pointer to a far function is taken).
--bank-windowname
The following options are supported to control handling of GOT generation when linking for 68K targets.
--got=type
You can change the behaviour of ld with the environment variables
GNUTARGET,
LDEMULATION and
COLLECT_NO_DEMANGLE.
GNUTARGET determines the input-file object format if you don't
use ‘-b’ (or its synonym ‘-
‘-m’ option. The emulation can affect various aspects of linker
behaviour, particularly the default linker script. You can list the
available emulations with the ‘--verbose’ or ‘-V’ options. If
the ‘ ‘--demangle’ and ‘- ‘--verbose’ command-line option to display the default linker script. Certain command-line options, such as ‘-r’ or ‘-N’, will affect the default linker script.
You may supply your own linker script by using the ‘ ‘ ‘ ‘/*’ and ‘*/’. As in C, comments are syntactically equivalent to whitespace./’ character, and the script being processed was
located inside the sysroot prefix, the filename will be looked
for in the sysroot prefix. The sysroot prefix can also be forced by specifying
= as the first character in the filename path, or prefixing the
filename path with
$SYSROOT. See also the description of
‘-L’ in Command-line Options.
If a sysroot prefix is not used then the linker will try to open the file in the directory containing the linker script. If it is not found the linker will then search the current directory. If it is still not found the linker will search through the archive library search path.
If you use ‘INPUT (-lfile)’, ld will transform the
name to
libfile
.a, as with the command-line argument
‘ ‘-(’ ‘ ‘ ‘--oformat bfdname’ on the command line (see Command-line Options). If both are used, the command line option takes precedence.
You can use
OUTPUT_FORMAT with three arguments to use different
formats based on the ‘-EB’ and ‘-EL’ command-line options.
This permits the linker script to set the output format based on the
desired endianness.
If neither ‘-EB’ nor ‘-EL’ are used, then the output format will be the first argument, default. If ‘-EB’ is used, the output format will be the second argument, big. If ‘
‘elf32-bigmips’, but if the user uses the ‘-EL’ command-line
option, the output file will be created in the ‘elf32-littlemips’
format.
TARGET(bfdname
)
TARGETcommand names the BFD format to use when reading input files. It affects subsequent
INPUTand
GROUPcommands. This command is like using ‘-b bfdname’ on the command line (see Command-line Options). If the
TARGETcommand is used but
OUTPUT_FORMATis not, then the last
TARGETcommand is also used to set the format for the output file. See BFD.
Alias names can be added to existing memory regions created with the MEMORY command. Each name corresponds to at most one memory region.
REGION_ALIAS(alias, region)
The
REGION_ALIAS function creates an alias name alias for the
memory region region. This allows a flexible mapping of output sections
to memory regions. An example follows.
Suppose we have an application for embedded systems which come with various
memory storage devices. All have a general purpose, volatile memory
RAM
that allows code execution or data storage. Some may have a read-only,
non-volatile memory
ROM that allows code execution and read-only data
access. The last variant is a read-only, non-volatile memory
ROM2 with
read-only data access and no code execution capability. We have four output
sections:
.textprogram code;
.rodataread-only data;
.dataread-write initialized data;
.bssread-write zero initialized data.
The goal is to provide a linker command file that contains a system independent
part defining the output sections and a system dependent part mapping the
output sections to the memory regions available on the system. Our embedded
systems come with three different memory setups
A,
B and
C:
RAM/ROMor
RAM/ROM2means that this section is loaded into region
ROMor
ROM2respectively. Please note that the load address of the
.datasection starts in all three variants at the end of the
.rodatasection.
The base linker script that deals with the output sections follows. It
includes the system dependent
linkcmds.memory file that describes the
memory layout:
INCLUDE linkcmds.memory SECTIONS { .text : { *(.text) } > REGION_TEXT .rodata : { *(.rodata) rodata_end = .; } > REGION_RODATA .data : AT (rodata_end) { data_start = .; *(.data) } > REGION_DATA data_size = SIZEOF(.data); data_load_start = LOADADDR(.data); .bss : { *(.bss) } > REGION_BSS }
Now we need three different
linkcmds.memory files to define memory
regions and alias names. The content of
linkcmds.memory for the three
variants
A,
B and
C:
A
RAM.
MEMORY { RAM : ORIGIN = 0, LENGTH = 4M } REGION_ALIAS("REGION_TEXT", RAM); REGION_ALIAS("REGION_RODATA", RAM); REGION_ALIAS("REGION_DATA", RAM); REGION_ALIAS("REGION_BSS", RAM);
B
ROM. Read-write data goes into the
RAM. An image of the initialized data is loaded into the
ROMand will be copied during system start into the
RAM.
MEMORY { ROM : ORIGIN = 0, LENGTH = 3M RAM : ORIGIN = 0x10000000, LENGTH = 1M } REGION_ALIAS("REGION_TEXT", ROM); REGION_ALIAS("REGION_RODATA", ROM); REGION_ALIAS("REGION_DATA", RAM); REGION_ALIAS("REGION_BSS", RAM);
C
ROM. Read-only data goes into the
ROM2. Read-write data goes into the
RAM. An image of the initialized data is loaded into the
ROM2and will be copied during system start into the
RAM.
MEMORY { ROM : ORIGIN = 0, LENGTH = 2M ROM2 : ORIGIN = 0x10000000, LENGTH = 1M RAM : ORIGIN = 0x20000000, LENGTH = 1M } REGION_ALIAS("REGION_TEXT", ROM); REGION_ALIAS("REGION_RODATA", ROM2); REGION_ALIAS("REGION_DATA", RAM); REGION_ALIAS("REGION_BSS", RAM);
It is possible to write a common system initialization routine to copy the
.data section from
ROM or
ROM2 into the
RAM if
necessary:
#include <string.h> extern char data_start []; extern char data_size []; extern char data_load_start []; void copy_data(void) { if (data_start != data_load_start) { memcpy(data_start, data_load_start, (size_t) data_size); } }
There are a few other linker scripts commands.
ASSERT(exp
).
EXTERN(symbol symbol
...)
EXTERN, and you may use
EXTERNmultiple times. This command has the same effect as the ‘-u’ command-line option.
FORCE_COMMON_ALLOCATION
INHIBIT_COMMON_ALLOCATION
ldomit the assignment of addresses to common symbols even for a non-relocatable output file.
FORCE_GROUP_ALLOCATION
INSERT [ AFTER | BEFORE ]output_section
SECTIONSwith,might look:
SECTIONS { OVERLAY : { .ov1 { ov1*(.text) } .ov2 { ov2*(.text) } } } INSERT AFTER .
NOCROSSREFS_TO(tosection fromsection
...)
)
objdumpprogram with the ‘-f’ option.
LD_FEATURE(string
)
"SANE_EXPR"then absolute symbols and numbers in a script are simply treated as numbers everywhere. See Expression Section. ‘.’ ‘floating_point’ will be defined as zero. The symbol ‘_etext’ will be defined as the address following the last ‘.text’ input section. The symbol ‘_bdata’ will be defined as the address following the ‘.text’ output section aligned upward to a 4 byte boundary.
For ELF targeted ports, define a symbol that will be hidden and won't be
exported. The syntax is
HIDDEN(symbol
= expression
).
Here is the example from Simple Assignments, rewritten to use
HIDDEN:
HIDDEN(floating_point = 0); SECTIONS { .text : { *(.text) HIDDEN(_etext = .); } HIDDEN(_bdata = (. + 3) & ~ 3); .data : { *(.data) } }
In this case none of the three symbols will be visible outside this module..
Note - the
PROVIDE directive considers a common symbol to be
defined, even though such a symbol could be combined with the symbol
that the
PROVIDE would create. This is particularly important
when considering constructor and destructor list symbols such as
‘__CTOR_LIST__’ as these are often defined as common symbols.
Similar to
PROVIDE. For ELF targeted ports, the symbol will be
hidden and won't be exported. ‘name an entry called ‘foo’ in the symbol table. This entry holds the address of an ‘int’ ‘foo’ in the symbol table, gets the address associated with this symbol and then writes the value 1 into that address. Whereas:
int * a = & foo;
looks up the symbol ‘foo’ in the symbol table, gets its address and then copies this address into the block of memory associated with the variable ‘a’.
Linker scripts symbol declarations, by contrast, create an entry in the symbol table but do not assign any memory to them. Thus they are an address without a value. So for example the linker script definition:
foo = 1000;
creates an entry in the symbol table called ‘foo’); start_of_FLASH = .FLASH;
Then the C source code to perform the copy would be:
extern char start_of_ROM, end_of_ROM, start_of_FLASH; memcpy (& start_of_FLASH, & start_of_ROM, & end_of_ROM - & start_of_ROM);
Note the use of the ‘&’ operators. These are correct. Alternatively the symbols can be treated as the names of vectors or arrays and then the code will again work as expected:
extern char start_of_ROM[], end_of_ROM[], start_of_FLASH[]; memcpy (start_of_FLASH, start_of_ROM, end_of_ROM - start_of_ROM);
Note how using this method does not require the use of ‘&’ operators. comma at the end may be required if a fillexp is used and the next sections-command looks like a continuation of the expression. ‘.text’, ‘.data’ or ‘ ‘/DISCARD/’ is special; Output Section Discarding. ‘.text’ output section to the current value of the location counter. The second will set it to the current value of the location counter aligned to the strictest alignment of any of the ‘ ‘.text’ sections, you would write:
*(.text)
Here the ‘*’. The EXCLUDE_FILE can also be placed inside the section list, for example:
*(EXCLUDE_FILE (*crtend.o *otherfile.o) .ctors)
The result of this is identically to the previous example. Supporting two syntaxes for EXCLUDE_FILE is useful if the section list contains more than one section, as described below.
There are two ways to include more than one section:
*(.text .rdata) *(.text) *(.rdata)
The difference between these is the order in which the ‘.text’ and ‘.rdata’ input sections will appear in the output section. In the first example, they will be intermingled, appearing in the same order as they are found in the linker input. In the second example, all ‘.text’ input sections will appear first, followed by all ‘.rdata’ input sections.
When using EXCLUDE_FILE with more than one section, if the exclusion is within the section list then the exclusion only applies to the immediately following section, for example:
*(EXCLUDE_FILE (*somefile.o) .text .rdata)
will cause all ‘.text’ sections from all files except somefile.o to be included, while all ‘.rdata’ sections from all files, including somefile.o, will be included. To exclude the ‘.rdata’ sections from somefile.o the example could be modified to:
*(EXCLUDE_FILE (*somefile.o) .text EXCLUDE_FILE (*somefile.o) .rdata)
Alternatively, placing the EXCLUDE_FILE outside of the section list, before the input file selection, will cause the exclusion to apply for all sections. Thus the previous example can be rewritten as:
EXCLUDE_FILE (*somefile.o) *(.text .rdata)
You can specify a file name to include sections from a particular file. You would do this if one or more of your files contain special data that needs to be at a particular location in memory. For example:
data.o(.data)
To refine the sections that are included based on the section flags of an input section, INPUT_SECTION_FLAGS may be used.
Here is a simple example for using Section header flags for ELF sections:
SECTIONS { .text : { INPUT_SECTION_FLAGS (SHF_MERGE & SHF_STRINGS) *(.text) } .text2 : { INPUT_SECTION_FLAGS (!SHF_WRITE) *(.text) } }
In this example, the output section ‘.text’ will be comprised of any
input section matching the name *(.text) whose section header flags
SHF_MERGE and
SHF_STRINGS are set. The output section
‘.text2’ will be comprised of any input section matching the name *(.text)
whose section header flag
SHF_WRITE is clear.
You can also specify files within archives by writing a pattern matching the archive, a colon, then the pattern matching the file, with no whitespace around the colon.
Either one or both of ‘archive’ and ‘file’ can contain shell
wildcards. On DOS based file systems, the linker will assume that a
single letter followed by a colon is a drive specifier, so
‘c:myfile.o’ is a simple file specification, not ‘myfile.o’
within an archive called ‘c’. ‘archive:file’ filespecs may
also be used within an
EXCLUDE_FILE list, but may not appear in
other linker script contexts. For instance, you cannot extract a file
from an archive by using ‘archive ‘archive:file’ specifier
and ‘*’ seen in many examples is a simple wildcard pattern for the file name.
The wildcard patterns are like those used by the Unix shell.
When a file name is matched with a wildcard, the wildcard characters will not match a ‘/’ character (used to separate directory names on Unix). A pattern consisting of a single ‘*’ character is an exception; it will always match any file name, whether it contains a ‘/’ or not. In a section name, the wildcard characters will match a ‘/’ similar to
SORT_BY_NAME.
SORT_BY_ALIGNMENT will sort sections into descending order of
alignment before placing them in the output file. Placing larger
alignments before smaller alignments can reduce the amount of padding
needed.
SORT_BY_INIT_PRIORITY is also similar to
SORT_BY_NAME.
SORT_BY_INIT_PRIORITY will sort sections into ascending
numerical order of the GCC init_priority attribute encoded in the
section name before placing them in the output file. In
.init_array.NNNNN and
.fini_array.NNNNN,
NNNNN is
the init_priority. In
.ctors.NNNNN and
.dtors.NNNNN,
NNNNN is 65535 minus the init_priority. two sections have the same name.
SORT_BY_ALIGNMENT(
SORT_BY_NAME(wildcard section pattern)). It will sort the input sections by alignment first, then by name if two.
SORT_NONE disables section sorting by ignoring the command-line
section sorting option.
If you ever get confused about where input sections are going, use the ‘-M’ linker option to generate a map file. The map file shows precisely how input sections are mapped to output sections.
This example shows how wildcard patterns might be used to partition files. This linker script directs the linker to place all ‘.text’ sections in ‘.text’ and all ‘.bss’ sections in ‘.bss’. The linker will place the ‘.data’ section from all files beginning with an upper case character in ‘.DATA’; for all other files, the linker will place the ‘.data’ section in ‘ ‘COMMON’.
You may use file names with the ‘COMMON’ section just as with any other input sections. You can use this to place common symbols from a particular input file in one section while common symbols from other input files are placed in another section.
In most cases, common symbols in input files will be placed in the ‘ ‘COMMON’ for standard common symbols and ‘.scommon’ for small common symbols. This permits you to map the different types of common symbols into memory at different locations.
You will sometimes see ‘[COMMON]’ in old linker scripts. This notation is now considered obsolete. It is equivalent to ‘*(COMMON)’.
When link-time garbage collection is in use (‘- ‘outputa’ which starts at location ‘0x10000’. All of section ‘.input1’ from file foo.o follows immediately, in the same output section. All of section ‘.input2’ from foo.o goes into output section ‘outputb’, followed by section ‘.input1’ from foo1.o. All of the remaining ‘.input1’ and ‘.input2’ sections from any files are written to output section ‘outputc’.
SECTIONS { outputa 0x10000 : { all.o foo.o (.input1) } outputb : { foo.o (.input2) foo1.o (.input1) } outputc : { *(.input1) *(.input2) } }
If an output section's name is the same as the input section's name and is representable as a C identifier, then the linker will automatically see PROVIDE two symbols: __start_SECNAME and __stop_SECNAME, where SECNAME is the name of the section. These indicate the start address and end address of the output section respectively. Note: most section names are not representable as C identifiers because they contain a ‘.’ character. ‘addr’: ‘0x90’:
FILL(0x90909090)
The
FILL command is similar to the ‘ ‘SORT_BY_NAME(CONSTRUCTORS)’ instead. When using the
.ctors and
.dtors sections, use ‘*(SORT_BY_NAME(.ctors))’ and
‘*(SORT_BY_NAME(.dtors))’ instead of just ‘*(.ctors)’ and
‘*(.dtors)’.
Normally the compiler and linker will handle these issues automatically, and you will not need to concern yourself with them. However, you may need to consider this if you are using C++ and writing your own linker scripts..
This can be used to discard input sections marked with the ELF flag
SHF_GNU_RETAIN, which would otherwise have been saved from linker
garbage collection.
Note, sections that match the ‘/DISCARD/’ output section will be discarded even if they are in an ELF section group which has other members which are not being discarded. This is deliberate. Discarding takes precedence over grouping.
We showed above that the full description of an output section looked like this:
section [address] [(type)] : [AT(lma)] [ALIGN(section_align) | ALIGN_WITH_INPUT] [SUBALIGN(subsection_align)] [constraint] { ‘ROM’ section is addressed at memory location ‘0’ and does not need to be loaded when the program is run.
SECTIONS { ROM 0 (NOLOAD) : { ... } ... }
Every section has a virtual address (VMA) and a load address (LMA); see
Basic Script Concepts. The virtual address is specified by the
see Output Section Address described earlier. The load address is
specified by the
AT or
AT> keywords. Specifying a load
address is optional.
The
AT keyword takes an expression as an argument. This
specifies the exact load address of the section. The
AT> keyword
takes the name of a memory region as an argument. See MEMORY. The
load address of the section is set to the next free address in the
region, aligned to the section's alignment requirements.
If neither
AT nor
AT> is specified for an allocatable
section, the linker will use the following heuristic to determine the
load address:
This feature is designed to make it easy to build a ROM image. For
example, the following linker script creates three output sections: one
called ‘.text’, which starts at
0x1000, one called
‘.mdata’, which is loaded at the end of the ‘.text’ section
even though its VMA is
0x2000, and one called ‘ increase an output section's alignment by using ALIGN. As an alternative you can enforce that the difference between the VMA and LMA remains intact throughout this output section with the ALIGN_WITH_INPUT attribute.
You can force input section alignment within an output section by using SUBALIGN. The value specified overrides any alignment given by input sections, whether larger or smaller.
You can specify that an output section should only be created if all
of its input sections are read-only or all of its input sections are
read-write by using the keyword
ONLY_IF_RO and
ONLY_IF_RW respectively.
You can assign a section to a previously defined region of memory by using ‘>region’. See MEMORY.
Here is a simple example:
MEMORY { rom : ORIGIN = 0x1000, LENGTH = 0x1000 } SECTIONS { ROM : { *(.text) } >rom }
You can assign a section to a previously defined program segment by
using ‘
‘ ‘0x’ and without a trailing ‘k’ or ‘M’, construct (see SECTIONS),
except that no addresses and no memory regions may be defined for
sections within an
OVERLAY.
The comma at the end may be required if a fill is used and the next sections-command looks like a continuation of the expression. are
provides ‘.text0’ and ‘.text1’ to start at
address 0x1000. ‘.text0’ will be loaded at address 0x4000, and
‘.text1’ will be loaded immediately after ‘.text0’. The
following symbols will be defined if referenced:
_) } PROVIDE (__load_start_text0 = LOADADDR (.text0)); PROVIDE (__load_stop_text0 = LOADADDR (.text0) + SIZEOF (.text0)); .text1 0x1000 : AT (0x4000 + SIZEOF (.text0)) { o2/*.o(.text) } PROVIDE (__load_start_text1 = LOADADDR (.text1)); PROVIDE (_ many uses of the
MEMORY command,
however, all memory blocks defined are treated as if they were
specified inside a single
MEMORY command. The syntax for
MEMORY ‘. The headers are processed in order and it
is usual for them to map to sections in ascending load address order.
Certain program header types describe segments of memory which the system loader will load from the file. In the linker script, you specify the contents of these segments by placing allocatable output sections in the segments. You use the ‘:phdr’ output section attribute to place a section in a particular segment. See Output Section Phdr.
It is normal to put certain sections in more than one segment. This merely implies that one segment of memory contains another. You may repeat ‘:phdr’, using it once for each segment which should contain the section.
If you place a section in one or more segments using ‘:phdr’,
then the linker will place all subsequent allocatable sections which do
not specify ‘ after
the program header type to further describe the contents of the segment.
The
FILEHDR keyword means that the segment should include the ELF
file header. The
PHDRS keyword means that the segment should
include the ELF program headers themselves. If applied to a loadable
segment (
PT_LOAD), all prior loadable segments must have one of
these keywords.
The type may be one of the following. The numbers indicate the value of the keyword.
PT_NULL(0)
PT_LOAD(1)
PT_DYNAMIC(2)
PT_INTERP(3)
PT_NOTE(4)
PT_SHLIB(5)
PT_PHDR(6)
PT_TLS , ‘original_foo’ to be an alias for ‘foo’ bound to the version node ‘VERS_1.1’. The ‘local:’ directive can be used to prevent the symbol ‘original_foo’ from being exported. A ‘ ‘,’.
It is possible to refer to target-specific constants via the use of
the
CONSTANT(name
) operator, where name is one of:
MAXPAGESIZE
COMMONPAGESIZE
So for example:
.text ALIGN (CONSTANT (MAXPAGESIZE)) : { *(.text) }
will create a text section aligned to the largest page boundary supported by the target., ‘A-B’ is one symbol, whereas ‘A - B’ is an expression involving subtraction.
Orphan sections are sections present in the input files which are not explicitly placed into the output file by the linker script. The linker will still copy these sections into the output file by either finding, or creating a suitable output section in which to place the orphaned input section.
If the name of an orphaned input section exactly matches the name of an existing output section, then the orphaned input section will be placed at the end of that output section.
If there is no output section with a matching name then new output sections will be created. Each new output section will have the same name as the orphan section placed within it. If there are multiple orphan sections with the same name, these will all be combined into one new output section.
If new output sections are created to hold orphaned input sections, then the linker must decide where to place these new output sections in relation to existing output sections. On most modern targets, the linker attempts to place orphan sections after sections of the same attribute, such as code vs data, loadable vs non-loadable, etc. If no sections with matching attributes are found, or your target lacks this support, the orphan section is placed at the end of the file.
The command-line options ‘--orphan-handling’ and ‘--unique’ (see Command-line Options) can be used to control which output sections an orphan is placed in.
The special linker variable dot ‘.’ ‘.’, must be evaluated during section allocation.
If the result of an expression is required, but the value is not available, then an error results. For example, a script like the following
SECTIONS { .text 9+this_isnt_constant : { *(.text) } }
will cause the error message ‘non constant expression for initial address’.
Addresses and symbols may be section relative, or absolute. A section relative symbol is relocatable. If you request relocatable output using the ‘.. involving numbers, relative addresses and absolute addresses, ld follows these rules to evaluate terms:
The result section of each sub-expression is as follows:
LD_FEATURE ("SANE_EXPR")or inside an output section definition but an absolute address otherwise.
You can use the builtin function
ABSOLUTE to force an expression
to be absolute when it would otherwise be relative. For example, to
create an absolute symbol set to the address of the end of the output
section ‘.data’:
SECTIONS { .data : { *(.data) _edata = ABSOLUTE(.); } }
If ‘ABSOLUTE’ were not used, ‘_edata’ would be relative to the ‘.data’ section.
Using
LOADADDR also forces an expression absolute, since this
particular builtin function returns an absolute address.
The linker script language includes a number of builtin functions for use in linker script expressions.
ABSOLUTE(exp
)
ADDR(section
)
start_of_output_1,
symbol_1and
symbol_2are assigned equivalent values, except that
symbol_1will be relative to the
.output1section while the other two will be absolute:(ABSOLUTE(.).
ALIGNOF(section
)
.outputsection is stored as the first value in that section.
SECTIONS{ ... .output { LONG (ALIGNOF (.output)) ... } ... }
BLOCK(exp
)
ALIGN, for compatibility with older linker scripts. It is most often seen when setting the address of an output section.
DATA_SEGMENT_ALIGN(maxpagesize
,commonpagesize
)
(ALIGN(maxpagesize) + (. & (maxpagesize - 1)))
or
(ALIGN(maxpagesize) + ((. +
)
DATA_SEGMENT_ALIGNevaluation purposes.
. = DATA_SEGMENT_END(.);
DATA_SEGMENT_RELRO_END(offset
,exp
)
PT_GNU_RELROsegment when ‘-z relro’ option is used. When ‘-z relro’ option is not present,
DATA_SEGMENT_RELRO_ENDdoes nothing, otherwise
DATA_SEGMENT_ALIGNis padded so that exp + offset is aligned to the commonpagesize argument given to
DATA_SEGMENT_ALIGN. If present in the linker script, it must be placed between
DATA_SEGMENT_ALIGNand
DATA_SEGMENT_END. Evaluates to the second argument plus any padding needed at the end of the
PT_GNU_RELROsegment due to section alignment.
. = DATA_SEGMENT_RELRO_END(24, .);
DEFINED(symbol
)
SECTIONS { ... .text : { begin = DEFINED(begin) ? begin : . ; ... } ... }
LENGTH(memory
)
LOADADDR(section
)
LOG2CEIL(exp
)
LOG2CEIL(0)returns 0. ‘not.
The linker can use dynamically loaded plugins to modify its behavior. For example, the link-time optimization feature that some compilers support is implemented with a linker plugin.
Currently there is only one plugin shipped by default, but more may be added here later.
Originally, static libraries were contained in an archive file consisting just of a collection of relocatable object files. Later they evolved to optionally include a symbol table, to assist in finding the needed objects within a library. There their evolution ended, and dynamic libraries rose to ascendance.
One useful feature of dynamic libraries was that, more than just collecting multiple objects into a single file, they also included a list of their dependencies, such that one could specify just the name of a single dynamic library at link time, and all of its dependencies would be implicitly referenced as well. But static libraries lacked this feature, so if a link invocation was switched from using dynamic libraries to static libraries, the link command would usually fail unless it was rewritten to explicitly list the dependencies of the static library.
The GNU ar utility now supports a --record-libdeps option to embed dependency lists into static libraries as well, and the libdep plugin may be used to read this dependency information at link time. The dependency information is stored as a single string, carrying -l and -L arguments as they would normally appear in a linker command line. As such, the information can be written with any text utility and stored into any archive, even if GNU ar is not being used to create the archive. The information is stored in an archive member named ‘__.LIBDEP’.
For example, given a library libssl.a that depends on another library libcrypto.a which may be found in /usr/local/lib, the ‘__.LIBDEP’ member of libssl.a would contain
-L/usr/local/lib -lcrypto
ld has additional features on some platforms; the following sections describe them. Machines where ld has no additional functionality are not listed.
For the H8/300, ld can perform these global optimizations when you specify the ‘- ‘mov.b
@aa:16’ into ‘mov.b
@aa:8’ whenever the address aa is in the top page of memory).
ld finds all
mov instructions which use the register
indirect with 32-bit displacement addressing mode, but use a small
displacement inside 16-bit displacement range, and changes them to use
the 16-bit displacement form. (That is: the linker turns ‘mov.b
@d:32,ERx’ into ‘mov.b
@d:16,ERx’
whenever the displacement d is in the 16 bit signed integer
range. Only implemented in ELF-format ld). ‘bset #xx:3,
@aa:32’ into ‘bset #xx:3,
@aa:8’ whenever the address aa is in the top page of memory).
ldc.w, stc.winstructions which use the 32 bit absolute address form, but refer to the top page of memory, and changes them to use 16 bit address form. (That is: the linker turns ‘ldc.w
@aa:32,ccr’ into ‘ldc.w
@aa:16,ccr’ whenever the address aa is in the top page of memory).
For the Motorola 68HC11, ld can perform these global optimizations when you specify the ‘--.
The ‘- ‘--vfp-denorm-fix=scalar’ if you are using the VFP11 scalar mode only, or ‘--vfp-denorm-fix=vector’ if you are using vector mode (the latter also works for scalar code). The default is ‘-.
When generating a shared library, ld will by default generate import stubs suitable for use with a single sub-space application. The ‘--multi-subspace’ switch causes ld to generate export stubs, and different (larger) import stubs suitable for use with multiple sub-spaces.
Long branch stubs and import/export stubs are placed by ld in stub sections located between groups of input sections. ‘- ‘N’ chooses this scheme, ensuring that branches to stubs always use a negative offset. Two special values of ‘N’ are recognized, ‘1’ and ‘-1’. These both instruct ld to automatically size input section groups for the branch types detected, with the same behaviour regarding stub placement as other positive or negative values of ‘N’ respectively... ‘.MMIX.reg_contents’ section.
Contents in this section is assumed to correspond to that of global
registers, and symbols referring to it are translated to special symbols,
equal to registers. In a final link, the start address of the
‘ ‘-m [mpu type]’ will select an appropriate linker script for selected MPU type. (To get a list of known MPUs just pass ‘-m help’ option to the linker).
The linker will recognize some extra sections which are MSP430 specific:
‘.vectors
’
‘.bootloader
’
‘.infomem
’
‘.infomemnobits
’
‘.noinit
’
The last two sections are used by gcc.
--code-regionand
--data-regionoptions. This is useful if you are compiling and linking using a single call to the GCC wrapper, and want to compile the source files using -m[code,data]-region but not transform the sections for prebuilt libraries and objects. ‘-z relro -z now’. However, this placement means that
.sdatacannot always be used in shared libraries, because the PowerPC ABI accesses
.sdatain shared libraries from the GOT pointer. ‘--sdata-got’ forces the old GOT placement. PowerPC GCC doesn't use
.sdatain shared libraries, so this option is really only useful for other compilers that may do so..
__tls_get_addrfor a given symbol to be resolved by the special stub without calling in to glibc. By default the linker enables generation of the stub when glibc advertises the availability of __tls_get_addr_opt. Using --tls-get-addr-optimize with an older glibc won't do much besides slow down your applications, but may be useful if linking an application against an older glibc with the expectation that it will normally be used on systems having a newer glibc. --tls-get-addr-regsave forces generation of a stub that saves and restores volatile registers around the call into glibc. Normally, this is done when the linker detects a call to __tls_get_addr_desc. Such calls then go via the register saving stub to __tls_get_addr_opt. --no-tls-get-addr-regsave disables generation of the register saves.
.opdsection entries corresponding to deleted link-once functions, or functions removed by the action of ‘-.
R_PPC64_PLTSEQ,
R_PPC64_PLTCALL,
R_PPC64_PLT16_HAand
R_PPC64_PLT16_LO_DSrelocations by a number of
nops and a direct call when the function is defined locally and can't be overridden by some other definition. This option disables=. A negative value may be specified to pad PLT call stubs so that they do not cross the specified power of two boundary (or the minimum number of boundaries if a PLT stub is so large that it must cross a boundary). By default PLT call stubs are aligned to 32-byte boundaries..
@notocPLT calls where
r2is not known. The power10 notoc stubs are smaller and faster, so are preferred for power10. --power10-stubs and --no-power10-stubs allow you to override the linker's selection of stub instructions. --power10-stubs=auto allows the user to select the default auto mode.
_.
The ‘-:
gcc -o <output> <objectfiles> <dll name>.def
Using a DEF file turns off the normal auto-export behavior, unless the ‘--export-all-symbols’ option is also used.
Here is an example of a DEF file for a shared library called ‘xyz] [== <name3>] ) *
Declares<name3>’ is the to be used string in import/export table for the symbol. ‘- ‘- ‘--enable-auto-import’ and ‘automatic data imports’ for more information.
auto-import of variables does not always work flawlessly without additional assistance. Sometimes, you will see this message
"variable '<var>' can't be auto-imported. Please read the
documentation for ld's
--enable-auto-import for details."
The ‘- ‘-.
‘--enable-runtime-pseudo-relocs’ is not the default; it must be explicitly enabled as needed.
Linking directly to a dll uses no extra command-line switches other than ‘-L’ and ‘ ‘-lxxx’ it will attempt to find, in the first directory of its search path,
libxxx.dll.a xxx.dll.a libxxx.a xxx.lib libxxx.lib cygxxx.dll (*) libxxx.dll xxx.dll
before moving on to the next directory in the search path.
(*) Actually, this is not_foo = foo’ maps the symbol ‘foo’ to ‘-:
.xtensa.infosection of the first input object. A warning is issued if ABI tags of input objects do not match each other or the chosen output object ABI. ‘-u’, ‘-c’, or ‘ ‘ ‘*’ are comments.
You can write these commands using all upper-case letters, or all lower case; for example, ‘chip’ is the same as ‘CHIP’. S-records, if output-format is ‘S’
LISTanything
...
The keyword
LIST may be followed by anything on the
same line, with no change in its effect.
LOADfilename
LOADfilename
,filename
, ...filename
NAMEoutput-name
NAMEis equivalent to the command-line option ‘
--as-needed: Options
--auditAUDITLIB: Options
--auxiliary=name: Options
--bank-window: Options
--base-file: Options
--be8: ARM
--bss-plt: PowerPC ELF32
--build-id: Options
--build-id=style: Options
--check-sections: Options
--cmse-implib: ARM
--code-region: MSP430
--compress-debug-sections=none: Options
--compress-debug-sections=zlib: Options
--compress-debug-sections=zlib-gabi: Options
--compress-debug-sections=zlib-gnu: Options
--copy-dt-needed-entries: Options
--cref: Options
--ctf-share-types: Options
--ctf-variables: Options
--data-region: MSP430
--default-imported-symver: Options
--default-script=script: Options
--default-symver: Options
--defsym=symbol
=exp: Options
--demangle[=style
]: Options
--depauditAUDITLIB: Options
--dependency-file=depfile: Options
--disable-auto-image-base: Options
--disable-auto-import: Options
--disable-large-address-aware: Options
--disable-long-section-names: Options
--disable-multiple-abs-defs: Options
--disable-new-dtags: Options
--disable-runtime-pseudo-reloc: Options
--disable-sec-transformation: MSP430
--disable-stdcall-fixup: Options
--discard-all: Options
--discard-locals: Options
--dll: Options
--dll-search-prefix: Options
--dotsyms: PowerPC64 ELF64
--dsbt-index: Options
--dsbt-size: Options
--dynamic-linker=file: Options
--dynamic-list-cpp-new: Options
--dynamic-list-cpp-typeinfo: Options
--dynamic-list-data: Options
--dynamic-list=dynamic-list-file: Options
--dynamicbase: Options
--eh-frame-hdr: Options
--embedded-relocs: Options
--emit-relocs: Options
--emit-stack-syms: SPU ELF
--emit-stub-syms: SPU ELF
--emit-stub-syms: PowerPC64 ELF64
--emit-stub-syms: PowerPC ELF32
--enable-auto-image-base: Options
--enable-auto-import: Options
--enable-extra-pe-debug: Options
--enable-long-section-names: Options
--enable-new-dtags: Options
--enable-non-contiguous-regions: Options
--enable-non-contiguous-regions-warnings: Options
--enable-reloc-section: Options
--enable-runtime-pseudo-reloc: Options
--enable-stdcall-fixup: Options
--entry=entry: Options
--error-handling-script=scriptname: Options
--error-unresolved-symbols: Options
--exclude-all-symbols: Options
--exclude-libs: Options
--exclude-modules-for-implib: Options
--exclude-symbols: Options
--export-all-symbols: Options
--export-dynamic: Options
--export-dynamic-symbol-list=file: Options
--export-dynamic-symbol=glob: Options
--extra-overlay-stubs: SPU ELF
--fatal-warnings: Options
--file-alignment: Options
--filter=name: Options
--fix-arm1176: ARM
--fix-cortex-a53-835769: ARM
--fix-cortex-a8: ARM
--fix-stm32l4xx-629360: ARM
--fix-v4bx: ARM
--fix-v4bx-interworking: ARM
--force-dynamic: Options
--force-exe-suffix: Options
--force-group-allocation: Options
--forceinteg: Options
--format=format: Options
--format=version: TI COFF
--gc-keep-exported: Options
--gc-sections: Options
--got: Options
--got=type: M68K
--gpsize=value: Options
--hash-size=number: Options
--hash-style=style: Options
--heap: Options
--help: Options
--high-entropy-va: Options
--image-base: Options
--in-implib=file: ARM
--insert-timestamp: Options
--just-symbols=file: Options
--kill-at: Options
--large-address-aware: Options
--ld-generated-unwind-info: Options
--leading-underscore: Options
--library-path=dir: Options
--library=namespec: Options
--local-store=lo:hi: SPU ELF
--long-plt: ARM
--major-image-version: Options
--major-os-version: Options
--major-subsystem-version: Options
--merge-exidx-entries: ARM
--minor-image-version: Options
--minor-os-version: Options
--minor-subsystem-version: Options
--mri-script=MRI-cmdfile: Options
--multi-subspace: HPPA ELF32
--nmagic: Options
--no-accept-unknown-input-arch: Options
--no-add-needed: Options
--no-allow-shlib-undefined: Options
--no-apply-dynamic-relocs: ARM
--no-as-needed: Options
--no-bind: Options
--no-check-sections: Options
--no-copy-dt-needed-entries: Options
--no-ctf-variables: Options
--no-define-common: Options
--no-demangle: Options
--no-dotsyms: PowerPC64 ELF64
--no-dynamic-linker: Options
--no-eh-frame-hdr: Options
--no-enum-size-warning: ARM
--no-export-dynamic: Options
--no-fatal-warnings: Options
--no-fix-arm1176: ARM
--no-fix-cortex-a53-835769: ARM
--no-fix-cortex-a8: ARM
--no-gc-sections: Options
--no-inline-optimize: PowerPC64 ELF64
--no-isolation: Options
--no-keep-memory: Options
--no-leading-underscore: Options
--no-merge-exidx-entries: ARM
--no-merge-exidx-entries: Options
--no-multi-toc: PowerPC64 ELF64
--no-omagic: Options
--no-opd-optimize: PowerPC64 ELF64
--no-overlays: SPU ELF
--no-plt-align: PowerPC64 ELF64
--no-plt-localentry: PowerPC64 ELF64
--no-plt-static-chain: PowerPC64 ELF64
--no-plt-thread-safe: PowerPC64 ELF64
--no-power10-stubs: PowerPC64 ELF64
--no-print-gc-sections: Options
--no-print-map-discarded: Options
--no-save-restore-funcs: PowerPC64 ELF64
--no-seh: Options
--no-strip-discarded: Options
--no-tls-get-addr-optimize: PowerPC64 ELF64
--no-tls-get-addr-regsave: PowerPC64 ELF64
--no-tls-optimize: PowerPC64 ELF64
--no-tls-optimize: PowerPC ELF32
--no-toc-optimize: PowerPC64 ELF64
--no-toc-sort: PowerPC64 ELF64
--no-trampoline: Options
--no-undefined: Options
--no-undefined-version: Options
--no-warn-mismatch: Options
--no-warn-search-mismatch: Options
--no-wchar-size-warning: ARM
--no-whole-archive: Options
--noinhibit-exec: Options
--non-overlapping-opd: PowerPC64 ELF64
--nxcompat: Options
--oformat=output-format: Options
--omagic: Options
--orphan-handling=MODE: Options
--out-implib: Options
--output-def: Options
--output=output: Options
--pic-executable: Options
--pic-veneer: ARM
--plt-align: PowerPC64 ELF64
--plt-localentry: PowerPC64 ELF64
--plt-static-chain: PowerPC64 ELF64
--plt-thread-safe: PowerPC64 ELF64
--plugin: SPU ELF
--pop-state: Options
--power10-stubs: PowerPC64 ELF64
--print-gc-sections: Options
--print-map: Options
--print-map-discarded: Options
--print-memory-usage: Options
--print-output-format: Options
--push-state: Options
--reduce-memory-overheads: Options
--relax: Options
--relax on PowerPC: PowerPC ELF32
--relocatable: Options
--require-defined=symbol: Options
--retain-symbols-file=filename: Options
--save-restore-funcs: PowerPC64 ELF64
--script=script: Options
--sdata-got: PowerPC ELF32
--section-alignment: Options
--section-start=sectionname
=org: Options
--secure-plt: PowerPC ELF32
--sort-common: Options
--sort-section=alignment: Options
--sort-section=name: Options
--spare-dynamic-tags: Options
--split-by-file: Options
--split-by-reloc: Options
--stack: Options
--stack-analysis: SPU ELF
--stats: Options
--strip-all: Options
--strip-debug: Options
--strip-discarded: Options
--stub-group-size: PowerPC64 ELF64
--stub-group-size=N: HPPA ELF32
--stub-group-size=N: ARM
--subsystem: Options
--support-old-code: ARM
--sysroot=directory: Options
--target-help: Options
--target1-abs: ARM
--target1-rel: ARM
--target2=type: ARM
--task-link: Options
--thumb-entry=entry: ARM
--tls-get-addr-optimize: PowerPC64 ELF64
--tls-get-addr-regsave: PowerPC64 ELF64
--trace: Options
--trace-symbol=symbol: Options
--traditional-format: Options
--tsaware: Options
--undefined=symbol: Options
--unique[=SECTION
]: Options
--unresolved-symbols: Options
--use-blx: ARM
--use-nul-prefixed-import-tables: ARM
--verbose[=NUMBER
]: Options
--version: Options
--version-script=version-scriptfile: Options
--vfp11-denorm-fix: ARM
--warn-alternate-em: Options
--warn-common: Options
--warn-constructors: Options
--warn-multiple-gp: Options
--warn-once: Options
--warn-section-align: Options
--warn-textrel: Options
--warn-unresolved-symbols: Options
--wdmdriver: Options
--whole-archive: Options
--wrap=symbol: Options
-akeyword: Options
-assertkeyword: Options
-bformat: Options
-Bdynamic: Options
-Bgroup: Options
-Bshareable: Options
-Bstatic: Options
-Bsymbolic: Options
-Bsymbolic-functions: Options
-cMRI-cmdfile: Options
-call_shared: Options
-d: Options
-dc: Options
-dn: Options
-dp: Options
-dTscript: Options
-dy: Options
-E: Options
-eentry: Options
-EB: Options
-EL: Options
-Fname: Options
-fname: Options
-fini=name: Options
-g: Options
-Gvalue: Options
-hname: Options
-i: Options
-Ifile: Options
-init=name: Options
-Ldir: Options
-lnamespec: Options
-M: Options
-memulation: Options
-Map=mapfile: Options
-N: Options
-n: Options
-non_shared: Options
-nostdlib: Options
-Olevel: Options
-ooutput: Options
-PAUDITLIB: Options
-pie: Options
-pluginname: Options
-q: Options
-qmagic: Options
-Qy: Options
-r: Options
-Rfile: Options
-rpath-link=dir: Options
-rpath=dir: Options
-S: Options
-s: Options
-shared: Options
-soname=name: Options
-static: Options
-t: Options
-Tscript: Options
-Tbss=org: Options
-Tdata=org: Options
-Tldata-segment=org: Options
-Trodata-segment=org: Options
-Ttext-segment=org: Options
-Ttext=org: Options
-usymbol: Options
-Ur: Options
-V: Options
-v: Options
-X: Options
-x: Options
-Ypath: Options
-ysymbol: Options
-z defs: Options
-zkeyword: Options
-z muldefs: Options
-z und
ALIGN(section_align
): Forced Output Alignment
ALIGNOF(section
):
COMMONPAGESIZE: Symbolic Constants
CONSTANT: Symbolic Constants
FORCE_GROUP_ALLOCATION: Miscellaneous Commands
FORMAT(MRI): MRI
GNUTARGET: Environment
GROUP(files
): File Commands
INCLUDEfilename: File Commands
INHIBIT_COMMON_ALLOCATION: Miscellaneous Commands
INPUT(files
): File Commands
INSERT: Miscellaneous Commands
l =: MEMORY
LD_FEATURE(string
): Miscellaneous Commands
LDEMULATION: Environment
len =: MEMORY
LENGTH =: MEMORY
LENGTH(memory
): Builtin Functions
LIST(MRI): MRI
LOAD(MRI): MRI
LOADADDR(section
): Builtin Functions
LOG2CEIL(exp
): Builtin Functions
LONG(expression
): Output Section Data
MAX: Builtin Functions
MAXPAGESIZE: Symbolic Constants
MEMORY: MEMORY
MIN: Builtin Functions
NAME(MRI): MRI
NEXT(exp
): Builtin Functions
NOCROSSREFS(sections
): Miscellaneous Commands
NOCROSSREFS_TO(tosection fromsections
): Miscellaneous Commands
NOLOAD: Output Section Type
o =: MEMORY
objdump -i: BFD
ONLY_IF_RO: Output Section Constraint
ONLY_IF_RW: Output Section Constraint
REGION_ALIAS(alias
,region
): REGION_ALIAS | https://docs.adacore.com/live/wave/binutils-stable/html/ld/ld.html | 2021-11-27T06:11:49 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.adacore.com |
Getting Familiar With Operate
This section "Getting Familiar With Operate" and the next section “Incidents and Payloads” assumes that you’ve deployed a workflow to Zeebe and have created at least one workflow instance.
If you’re not sure how to deploy workflows or create instances, we recommend going through the "Getting Started tutorial".
In the following sections, we’ll use the same
order-process.bpmn workflow model from the Getting Started guide.
#View A Deployed Workflow
In the “Instances by Workflow” panel in your dashboard, you should see a list of your deployed workflows and running instances.
When you click on the name of a deployed workflow in the “Instances by Workflow” panel, you’ll navigate to a view of that workflow model along with all running instances.
From this “Running Instances” view, you have the ability to cancel a single running workflow instance.
#Inspect A Workflow Instance
Running workflow instances appear in the “Instances” section below the workflow model. To inspect a specific instance, you can click on the instance ID.
There, you’ll be able to see detail about the workflow instance, including the instance history and the variables attached to the instance.
| https://docs.camunda.io/docs/0.25/components/operate/userguide/basic-operate-navigation/ | 2021-11-27T06:16:55 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['/assets/images/Operate-Dashboard-Deployed-Workflow-b570c7231e4027ddd85f05e015269258.png',
'operate-dash-with-workflows'], dtype=object)
array(['/assets/images/Operate-View-Workflow-49f63b8f71a2203c49ce98750e3a5970.png',
'operate-view-workflow'], dtype=object)
array(['/assets/images/Operate-View-Workflow-Cancel-98303b00bd09494a651234cf334d157c.png',
'operate-cancel-workflow-instance'], dtype=object)
array(['/assets/images/Operate-Workflow-Instance-ID-abb4a7adf2f5bc25c384a47db943a235.png',
'operate-inspect-instance'], dtype=object)
array(['/assets/images/Operate-View-Instance-Detail-757f6384d8a775633ae41323b47cb2f7.png',
'operate-view-instance-detail'], dtype=object) ] | docs.camunda.io |
Android SDK Offerings. Haptik SDK Offerings:2. Haptik Image Loading Modules:3. Pro Tips
1. Haptik SDK Offerings:
Haptik offers a modular SDK. Clients can pick and choose the functionality that they want to integrate in their application.
- Haptik core module
- For enabling the basic functionality of Haptik.
- It includes the basic UI components for the SDK and the messaging screen to communicate with the bot.
- Haptik extensions module
- This module includes the Haptik inbox.
- This will enable the user to view the channels on the Haptik platform in a singular view.
- This module is optional, if added will by default add the core module.
2. Haptik Image Loading Modules:
Images play a very important role in making the UI attractive for the users.
Haptik SDK now supports two of the popular image loading libraries to load images.
- Haptik Glide Module
- Haptik SDK will use the Glide Image Loading library to load images.
- Haptik Picasso Module
- Haptik SDK will use the Picasso Image Loading library to load images.
NOTE It is mandatory to specify an Image Loading Library when Haptik SDK is being initialized. Not specifying an Image Loading Library will cause the app to crash at run time.
3. Pro Tips
- For the basic functionality of the messaging screen, only add the core module | https://docs.haptik.ai/android-sdk/android-sdk-offerings-legacy | 2021-11-27T06:12:40 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.haptik.ai |
Intelligent Order Management home page
Welcome to Dynamics 365 Intelligent Order Management. We're delighted to offer you a suite of capabilities to help your organization coordinate, standardize, and optimize orders captured in a mix of channels and systems through a single point of order orchestration. The standardization and optimization, together with incorporated intelligence, will optimize the speed of delivery while minimizing costs, resulting in improved customer satisfaction and higher gross margin to your organization.
The Intelligent Order Management overview video video (shown above) is included in the Finance and Operations playlist available on YouTube. | https://docs.microsoft.com/en-us/dynamics365/intelligent-order-management/ | 2021-11-27T07:00:35 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
Use FetchXML to construct a query
Note
Unsure about entity vs. table? See Developers: Understand terminology in Microsoft Dataverse.
FetchXML is a proprietary XML based query language of Microsoft Dataverse used to query data using either the Web API or the Organization service. Table and as an organization-owned saved view in the SavedQuery Table.
Create the FetchXML query string
To execute a FetchXML. Set
visible on the
link-entity node to
false to hide the linked table in the Advanced Find user interface. It will still participate in the execution of the query and will return the appropriate results.
Warning
Don't retrieve all columns in a query because of the adverse effect on performance. This is particularly true if the query is used as a parameter to an update request. In an update, if all columns are included this sets all field values, even if they are unchanged, and often triggers cascaded updates to child records.
Example FetchXML query strings>
Important
A FetchXML query has a limit of a maximum of 10 allowed link tables.
The
in operator of a FetchXML query is limited to 2000 values.
Execute the FetchXML query
You can execute a FetchXML query by using either the Web API or the Organization service.
Using Web API
You can pass a URL encoded FetchXml string to the appropriate entityset using the
fetchXml query string parameter. More information: Use custom FetchXML.
Using Organization service
Use the IOrganizationService.RetrieveMultiple method passing an FetchExpression where the Query property contains the FetchXml query.
The following code shows how to execute a FetchXML query using the Organizations service:
//"]); }
Note
You can convert a FetchXML query to a query expression with the FetchXmlToQueryExpressionRequest message.
FetchXML query results
When you execute a FetchXML query by using the OrganizationServiceProxy.RetrieveMultiple(QueryBase) method, the return value is an EntityCollection that contains the results of the query. You can then iterate through the table collection. The previous example uses the
foreach loop to iterate through the result collection of the FetchXML query. | https://docs.microsoft.com/en-us/powerapps/developer/data-platform/use-fetchxml-construct-query?WT.mc_id=BA-MVP-5003861 | 2021-11-27T05:30:34 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
MTMfair is far more than a B2B Marketplace where sellers upload products that buyers can purchase.
Pricing is a very sensitive issue and we have taken this fully into account when developing MTMfair's StockOrder feature. This means that not everyone can simply see your prices!
Price lists
You have the option of entering a default price list for each of the 4 customer groups:
Distributors
Agents
Retailers
In addition to the 3 customer groups you can also add a recommended retail price (RRP).
Important: As RRPs may differ from country to country, please only enter the RRP for your country.
Just because you can enter different price lists, does not mean that you must enter them all. Perhaps you only work with one or two of the customer groups. Simply ignore the columns that do not apply to your sales channel.
Not only can you set a default price list per customer group, you can also set individual price lists per country. Price lists can be individualised not only in numbers, but can also be allocated respective currencies if required.
Price visibility
Important: You as the brand owner should be able to keep control over who may see your prices and who not.
While uploading your products, you have the option to decide which customer group in which country is eligible to discover which product.
Example 1: You are wanting to build a sales in channel in Australia and have selected to make your brand visible to agents there. Now, the first agent to discover your brand is not necessarily the best fit and shouldn't have access to sensitive information before even speaking to you.
By going to
Sales settings in the column
Agents select the
Application required option to ensure that potential agents will first have to request a price list or contact you directly before being able see any of your prices. Once you have decided that a particular agent is the right fit, you can, by the click of a button, make all of your brand's pricing visible to this agent.
Example 2: In another country you sell to retail directly without going through distributors or agents. Perhaps you don't really mind which retailer purchases your products for resale and don't grant exclusivity. Go to
Sales settings and in the column
Retailers select the option
will be accepted to ensure that any logged in retailer will be able to see the wholesale price valid to him or her. Any retailer will be able to place an order.
Minimum order value or quantity
Most manufactures require a minimum order value when selling to retailers directly. However, in some cases manufacturers may also require a minimum order quantity.
At MTMfair you have the option of choosing which customer group is expected to purchase a minimum quantity and which a minimum value.
By going to
Sales settings simply choose whether the respective customer group is required to purchase
Minimum quantity and a
Minimum value. | https://docs.mtmfair.com/article/12/sales-settings-price-lists | 2021-11-27T04:57:05 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['https://cdn.helpspace.com/1b89e2ed53c34e24870ddeeda839bd4a/media/16/conversions/MTMfair-price-list-default-medium.jpg',
'MTMfair-price-list-default.jpg'], dtype=object)
array(['https://cdn.helpspace.com/1b89e2ed53c34e24870ddeeda839bd4a/media/17/conversions/MTMfair-price-list-per-country-medium.jpg',
'MTMfair-price-list-per-country.jpg'], dtype=object)
array(['https://cdn.helpspace.com/1b89e2ed53c34e24870ddeeda839bd4a/media/14/conversions/MTMfair-application-required-medium.png',
'MTMfair-application-required.png'], dtype=object)
array(['https://cdn.helpspace.com/1b89e2ed53c34e24870ddeeda839bd4a/media/15/conversions/MTMfair-minimum-order-quantity-medium.png',
'MTMfair-minimum-order-quantity.png'], dtype=object) ] | docs.mtmfair.com |
Database Connectors
SQL Connectors
Via connection objects and the Operators Read Database, Write Database, and Update Database, RapidMiner Studio supports all relational database systems offering a fully compliant JDBC driver, including:
In-Database Processing
Performing various data transformations directly in a relational database system instead of transferring the entire dataset and doing those transformations locally can have multiple benefits:
- reduced data transfer
- leveraging the computing power and efficiency of the relational database system
- ensuring the original data does not leave the relational database
The In-Database Processing extension enables Studio users to perform common data preparation and ETL tasks directly in the relational database. To do this, users can chain operators as they normally would in Studio, and the extension translates this into a complex SQL query that is then executed in the relational database system.
Currently it supports the following products:
- MySQL
- PostgreSQL
- Microsoft SQL Server
- Oracle
- Google BigQuery
We are continuously adding support to other database vendors based on popular demand.
To start using In-Database Processing, you need to install the In-Database Processing Extension:
Install In-Database Processing Extension in Studio
View in the RapidMiner Marketplace
How to install extensions
NoSQL Connectors
Cassandra – connecting to and integrating your Cassandra account with RapidMiner Studio
MongoDB – connecting to and integrating your MongoDB account with RapidMiner Studio
Solr – connecting to and searching your Solr server with RapidMiner Studio
Splunk – connecting to and searching your Splunk server with RapidMiner Studio
Installing the NoSQL Extension
To connect to Cassandra or MongoDB, you need to install the NoSQL Extension:
Installing the Solr Extension
To connect to Solr, you need to install the Solr Extension:
Installing the Splunk Extension
To connect to Splunk, you need to install the Splunk Extension: | https://docs.rapidminer.com/latest/studio/connect/database/ | 2021-11-27T06:12:01 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rapidminer.com |
- Architecture
- Running
orca
- Arguments common to multiple commands
orcacommands
- Note on symmetric activation state for upgrades
- Note on Rhino 2.6.x export
- Troubleshooting
Architecture
orca is a command-line tool used to distribute upgrade/patching tasks to multiple hosts,
such as all hosts in a Rhino cluster,
and it implements the cluster migration workflow.
It consists of:
the
orcascript, which is run from a Linux management host (for example a Linux workstation or one of the cluster’s REM nodes)
several sub-scripts and tools which are invoked by orca (never invoked directly by the user).
common modules
workflow modules
slee-data-migration
slee-data-transformation
slee-patch-runner
orca requires ssh connections with the remote hosts and will copy the required modules and tools to the hosts,
triggers the command from the management host and the transferred modules will run in the remote hosts.
The control is done via
ssh connection and
orca will store logs in the management host.
The picture below shows how
orca interacts with the tools.
Upon a valid command
orca will transfer the required module to the remote host.
Some commands include packages that will be used in the remote host, like a minor upgrade package or patch package.
The packages are transferred automatically, and then kept in the
install path in the remote host.
When the command is executing, the module running in the remote host will send messages back to the main process in the management host.
All messages are persisted in the log files in the management host for debug purposes.
Some messages are shown in the console for the user, others are used internally by
orca.
Running
orca
The command-line syntax for invoking
orca is
orca --hosts <host1,host2,…> [--skip-verify-connections] [--remote-home-dir DIR] <command> [args…]
where
host1,host2,…is a comma-separated list of all hosts on which the
<command>is to be performed. This will normally be all hosts in the cluster, but sometimes you may want to use a selected host or set of hosts. For example, you may want to do some operation in batches when the cluster is large. Some commands, such as
upgrade-rem, can operate on different types of hosts, which may not be running Rhino.
the
--skip-verify-connectionsoption, or
-kfor short, instructs
orcanot to test connectivity to the hosts before running a command. The default is to test connectivity with every command, which reduces the risk of a command being only partially completed due to a network issue.
the
--remote-home-diror
-roption is used to optionally specify a home path other than
/home/sentinelon the remote Rhino hosts. It should not be given when operating on other types of host, such as REM nodes.
commandis the command to run: one of
status,
prepare,
prepare-new-rhino,
migrate,
rollback,
cleanup,
apply-patch,
revert-patch,
minor-upgrade,
major-upgrade,
upgrade-rem,
standardize-paths,
import-feature-scripts,
rhino-only-upgradeor
run.
argsis one or more command-specific arguments.
For further information on
command and
args see the documentation for each command below.
orca writes output to the terminal and also to log files on the hosts.
Once the
command is complete,
orca may also copy additional log files off the hosts and store them
locally under the log directory.
Running a
migrate,
rollback,
apply-patch or
revert-patch command will typically take between 5 to 10 minutes per node, depending on the timeouts set to allow time for active calls to drain.
At the end of many commands,
orca will list which commands can be run given the current node/cluster status, for example:
Available actions: - prepare - cleanup --clusters - rollback
Arguments common to multiple commands
Some arguments are common to multiple
orca commands. They must always be specified after the command.
SLEE drain timeout
Rhino will not shut down until all activities (traffic) on the node have concluded. In order to achieve this, the operator must redirect traffic away be redirected away from the node before starting a procedure such as a migration or upgrade.
The argument
--stop-timeout N controls how long
orca will wait for active calls to drain and for the SLEE to stop gracefully.
The value is specified in seconds, or 0 for no timeout (wait forever). If calls have not drained in the specified time,
and thus the SLEE has not stopped, the node will be forcibly killed.
This option applies to the following commands:
migrate
rollback
apply-patch
revert-patch
minor-upgrade
major-upgrade
rhino-only-upgrade
It is optional, with a default value of 120 seconds.
Multi-stage operations
The
--pause,
--continue and
--no-pause options control behaviour during long operations which apply to
multiple hosts, where the user may want the operation to pause after the first host to run validation tests.
They apply to the following commands:
apply-patch
revert-patch
minor-upgrade
major-upgrade
rhino-only-upgrade
--pause is used to stop
orca after the first host,
--continue is used when re-running the command for
the remaining hosts, and
--no-pause means that
orca will patch/upgrade all hosts in one go. These options are
mutually exclusive and exactly one must be specified when using the above-listed commands.
REM installation information
The
--backup-diroption informs
orcawhere it can locate, or should store, REM backups created during upgrades.
The
--remote-tomcat-homeoption informs
orcawhere the REM Tomcat installation is located on the host (though
orcawill try to autodetect this based on environment variables, running processes and searching the home directory). It corresponds to the
CATALINA_HOMEenvironment variable.
In complex Tomcat setups you may also need to specify the
--remote-tomcat-baseoption which corresponds to the
CATALINA_BASEenvironment variable.
These options apply to the following commands:
status, when used on a host with REM installed
upgrade-rem
rollback-rem
cleanup-rem
All these options are optional and any number of them can be specified at once. When exactly one of
--remote-tomcat-home
and
--remote-tomcat-base is specified, the other defaults to the same value.
orca commands
Be sure you are familiar with the concepts and patching process overview described in Cluster migration workflow.
status
The
status command prints the status of the nodes on the specified hosts:
which cluster directories are present
which cluster directory is live
the SLEE state of the live node
a list of export (backup) directories present on the node, if any.
It will also output some global status information, which at present consists of a list of nodes that use per-node service activation state.
The
status command will display much of its information even if Rhino is not running, though there may be
additional information it can include when it can contact a running Rhino.
standardize-paths
The
standardize-paths command renames and reconfigures an existing Rhino installation so it conforms to the standard
path structure described here.
This command requires three arguments:
--product <product>, where
<product>is the name of the product installed in the Rhino installation. Specify the name as a single word in lowercase, e.g.
volteor
ipsmgw.
--version <version>, where
<version>is the version of the product installed in the Rhino installation. Specify the version as a set of numbers separated by dots, e.g.
2.7.0.10.
--sourcedir <sourcedir>, where
<sourcedir>is the path to the existing Rhino installation, relative to the
rhinouser’s home directory.
Note that the
standardize-paths command can only perform filesystem-level manipulation.
In particular, it cannot
change the user under whose home directory Rhino is installed (this should be the
rhinouser)
change the name of the database (this should be in the form
rhino_<cluster ID>)
change any init.d, systemctl or similar scripts that start Rhino automatically on boot, because editing these requires root privileges.
prepare
The
prepare command prepares for a migration by creating one node in the uplevel cluster.
It can take three arguments:
The
--copy-dbargument will copy the management database of the live cluster to the new cluster. This means that the new cluster will contain the same deployments as the live cluster.
The
--init-dbargument will initialize an empty management database in the new cluster. This will allow a different product version to be installed in the new cluster.
The
-n/
--new-versionargument takes a string that will be used as the version in the name of the new cluster, e.g. "2.7.0.6".
Note that the
--copy-db and
--init-db arguments cannot be used together.
The new cluster will have the next sequential ID to the current live cluster. For example, if the current live cluster has cluster ID 100, the new one will have cluster ID 101.
The configuration of the new cluster is adjusted automatically; there is no need to manually change any configuration files.
When preparing the first set of nodes in the uplevel cluster, use the
--copy-dbor
--init-dboption which will cause
orcato also prepare the new cluster’s database.
The current cluster’s replication configuration (used for Rhino stores in Cassandra) is persisted to the new cluster.
prepare-new-rhino
The
prepare-new-rhino command does the same action as the
prepare command with the exception that it will clone
the current cluster to a new cluster with a new specified Rhino. The replication configuration of the current cluster is still persisted to the new cluster.
The arguments are:
The
-n/
--new-versionargument takes a string that will be used as the version in the name of the new cluster, e.g. "2.7.0.6". The version number here is for the Sentinel product and not the Rhino version.
The
-r/
--rhino-packageargument takes the Rhino install package to use, e.g. "rhino-install-2.6.1.2.tar"
The
-o/
--installer-overridesargument takes a properties files used by the rhino-install.sh script.
migrate
The
migrate command runs a migration from a downlevel to an uplevel cluster.
You must have first prepared the uplevel cluster using the
prepare command.
To perform the migration,
orca will
if the
--special-first-hostoption was passed to the command, export the current live node configuration (as a backup)
stop the live node, and wait to ensure all sockets are closed cleanly
edit the
rhinosymlink to point at the uplevel cluster
start Rhino in the uplevel cluster (if the
--special-first-hostoption is used, then this Rhino node will be explicitly made into a primary node — use this if and only if the uplevel cluster is empty or has no nodes currently running)
rollback
The
rollback command will move a node back from the uplevel cluster to the downlevel cluster - the reverse of
migrate.
If there is more than one old cluster, the most recent (highest ID number) is assumed to be the target downlevel cluster.
The process is:
stop the live node, and wait to ensure all sockets are closed cleanly
edit the
rhinosymlink to point at the downlevel cluster
start Rhino in the downlevel cluster (if the
--special-first-hostoption is used, then this Rhino node will be explicitly made into a primary node - use this if and only if the downlevel cluster is empty or has no nodes currently running).
The usage of
--special-first-host or
-f can change depending on the cluster states.
If there are 2 clusters active (50 and 51 for example) and the user want to rollback a node from cluster 51, don’t use the
--special-first-host option.
The reason is that the cluster 50 already has a primary member.
If the only active cluster is the cluster 51 and the user wants to rollback one node or all the nodes, then include the
--special-first-host option, because the cluster 50
is inactive and the first node migrated has to be set as part of the primary group before other nodes join the cluster.
cleanup
The
cleanup command will delete a set of specified cluster or export (backup) directories on the specified nodes.
It takes the following arguments:
--clusters <id1,id2,…>, where
id1,id2,…is a comma-separated list of cluster IDs.
--clusters-to-keep n, which will delete all but the
nmost recent clusters, including the live cluster. This only considers old clusters for deletion, and not prepared clusters.
--exports <id1,id2,…>, where
id1,id2,…is a comma-separated list of cluster IDs.
--exports-to-keep n, which will delete all but the
nmost recent exports.
For example,
cleanup --clusters 102 --exports 103,104 will delete the cluster directory numbered 102 and the export directories corresponding to clusters 103 and 104.
Also,
cleanup --clusters-to-keep 2 --exports-to-keep 2 will delete all but the current live and previous cluster, and all but the 2 most recent exports. It won’t delete any prepared clusters.
This command will reject any attempt to remove the live cluster directory.
Only one of each type of option, cluster or export, can be specified.
In general this command should only be used to delete cluster directories:
once any patching or upgrading is fully completed, and the uplevel version has passed all acceptance tests and been accepted as live configuration
if it is determined that the patch or upgrade is faulty, and rollback to the downlevel version has been fully completed on all nodes.
When cleaning up a cluster directory, the corresponding cluster database is also deleted.
If the node is a VM created with
tasvmbuild, where the Rhino logs are located in a dedicated directory
per cluster, then the log directory for this cluster is deleted along with all its contents. In the case
of a failed upgrade, before deleting the uplevel cluster to try the upgrade again, please make sure
you have saved off a copy of any relevant logs required for troubleshooting.
apply-patch
The
apply-patch command will perform a
prepare and
migrate to apply a given patch to the specified nodes in a single step.
It requires one argument:
<file>, where
<file> is the path (full or relative path) to the patch .zip file.
Specifically, the
apply-patch command does the following:
on the first host in the specified list of hosts, prepare the uplevel cluster and migrate to it using the same processes as in the
prepareand
migratecommands
on the other hosts, prepare the uplevel cluster
copy the patch to the first host
apply the patch to the first host (using the
apply-patch.shscript provided in the patch .zip file)
once the first host has successfully recovered, migrate the other hosts. They will pick up the patch automatically as a consequence of being in the same new Rhino cluster.
revert-patch
The
revert-patch command operates identically to the
apply-patch command, but reverts the patch instead of applying it.
Like
apply-patch, it takes a single
<file> argument.
Since
apply-patch does a migration, it would be most common to revert a patch using the
rollback command.
However a rollback will lose any configuration changes made after applying the patch, whereas this command performs a second
migration and hence preserves configuration changes.
minor-upgrade
The
minor
To run a custom package during the installation, specify either or both in the
packages.cfg:
post_install_packageto specify a custom package to be applied after the new software is installed but before the configuration is restored
post_configure_packageto specify the custom package to be applied after all configuration and data migration is done on the first node, but before the other nodes are migrated
The workflow is as follows:
validate the specified install.properties, and check it is not empty
if post install package is present, copy the custom package to the first node and run the
installexecutable from that package
recreate profile tables in the new cluster as needed, to match the set of profile tables from the previous cluster
restore the rhino configuration from the customer export (access, logging, SNMP, object pools, etc)
restore the RA configuration
restore the profiles from the customer export (maintain the current customer configuration)
if post configure package is present, copy the post-configuration custom package to the first node and run the
installexecutable from that package
migrate other nodes as explained in Cluster migration workflow
A command example is:
./orca --hosts host1,host2,host3 minor-upgrade packages $HOME/install/install.properties
major-upgrade
The
major
optional
--skip-new-rhino, indicates to not install a new Rhino present as part of the upgrade package
optional
--installer-overrides, takes a properties file used by the rhino-install.sh script
optional
--license, a license file to install with the new Rhino. If no license is specified,
orcawill check for a license in the upgrade package and no license is present it will check if the current installed license is supported for the new Rhino version.
To run a custom package during the installation, specify either or both in the packages.cfg:
post_install_packageto specify a custom package to be applied after the new software is installed but before the configuration is restored
post_configure_packageto specify a custom package to be applied after all configuration and data migration is done on the first node, but before the other nodes are migrated
The workflow is as follows:
validate the specified install.properties, and check it is not empty
check the existing packages defined in
packages.cfgexist: sdk, rhino, java, post install, post configure
generate an export from the new version
apply data transformation rules on the downlevel export
recreate profile tables in the new cluster as needed, to match the set of profile tables from the previous cluster
if post install package is present, copy the custom package to the first node and run the
installexecutable from that package
restore the RA configuration after transformation
restore the profiles from the transformed export (maintain the current customer configuration)
restore the rhino configuration from the customer export (access, logging, SNMP, object pools, etc)
if post configure package is present, copy the post-configuration custom package to the first node and run the
installexecutable from that package
do a 3 way merge for Feature Scripts
manual import the Feature Scripts after checking they are correct; see Feature Scripts conflicts and resolution
migrate other nodes as explained in Cluster migration workflow
A command example is:
./orca --hosts host1,host2,host3 major-upgrade packages $HOME/install/install.properties
upgrade-rem
The
upgrade-rem command upgrades Rhino Element Manager hosts to new versions.
Like the other commands, it takes a list of hosts which the upgrade should be performed on, but these hosts are likely to be specific to Rhino Element Manager, and not actually running Rhino itself.
The command can be used to update both the main REM package, and also plugin modules.
A command example is:
./orca --hosts remhost1,remhost2 upgrade-rem packages
The information of which plugins to upgrade are present in the
packages.cfg file.
As part of this command
orca generates a backup. This is stored in the backup directory, which (if not overridden
by the
--backup-dir option) defaults to
~/rem-backup on the REM host. The backup takes the form of a directory named
<timestamp>#<number>, where
<timestamp> is the time the backup was created in the form
YYYYMMDD-HHMMSS,
and
<number> is a unique integer (starting at 1). For example:
20180901-114400#2
indicates a backup created at 11:44am on 1st September 2018, labelled as backup number 2. The backup number can be used
to refer to the backup in the
rollback-rem and
cleanup-rem commands described below.
The backup contains a copy of:
the
rem_homedirectory (which contains the plugins)
the
rem.warweb application archive
the
rem-rmi.jarfile
rollback-rem
The
rollback-rem command reverts a REM installation to a previous backup, by stopping Tomcat, copying the files in the
backup into place, and restarting Tomcat. Its syntax is
./orca --hosts remhost1 rollback-rem [--target N]
The
--target parameter is optional and specifies the number of the backup (that appears in the backup’s name after the
# symbol) to roll back to. If not specified,
orca defaults to the highest-numbered backup, which is likely to be
the most recent one.
Because REM is not clustered in the same way as Rhino nodes are, installations and backups may differ between REM nodes.
As such, to avoid unexpected results, the
--target parameter can only be used if exactly one host is specified.
cleanup-rem
The
cleanup-rem command deletes unwanted REM backup directories from a host. Its syntax is
./orca --hosts remhost1 cleanup-rem --backups N[,N,N...]
There is one mandatory parameter,
--backups, which specifies the backup(s) to clean up by their number(s)
(that appear in the backups' names after the
# symbol), comma-separated without any spaces.
Because REM is not clustered in the same way as Rhino nodes are, installations and backups may differ between REM nodes.
As such, to avoid unexpected results, the
cleanup-rem command can only be used on one host at a time.
run
The
run command allows the user to run a custom command on each host. It takes one mandatory argument
<command>, where
<command> is the command to run on each host (if it contains spaces or other characters
that may be interpreted by the shell, be sure to quote it).
It also takes one optional argument
--source <file>, where
<file> is a script or file to upload before executing the command.
(Files specified in this way are uploaded to a temporary directory, and so the command may need to move it to its correct location.)
For example:
./orca --hosts VM1,VM2 run "ls -lrt /home/rhino" ./orca --hosts VM1,VM2 run "mv file.zip /home/rhino; cd /home/rhino; unzip file.zip" --source /path/to/file.zip
Note on symmetric activation state for upgrades
Symmetric activation state mode must be enabled before starting a major or minor upgrade. This means that all services will be forced to have the same state across all nodes in the cluster. This ensures that all services start correctly on all nodes after the upgrade process completes. If your cluster normally operates with symmetric activation state mode disabled, it will need to be manually disabled after the upgrade and related maintenance operations are complete.
See Activation State in the Rhino documentation.
Note on Rhino 2.6.x export
The Rhino export configuration from Rhino 2.5.x to Rhino 2.6.x changed.
More specifically, Rhino 2.6.x exports now include data for each SLEE namespace and include SAS bundle configuration.
orca just restores configuration for the default namespace.
Sentinel products are expected to have just the default namespace; if there is more than one namespace
orca will raise an error.
orca does not restore the SAS configuration when doing minor upgrade.
The new product version being installed as part of the minor upgrade includes all the necessary SAS bundles.
For Rhino documentation about namespaces see Namespaces.
For Rhino SAS configuration see MetaView Service Assurance Server (SAS) Tracing
Troubleshooting
See Troubleshooting. | https://docs.rhino.metaswitch.com/ocdoc/books/operational-tools/1.2.0/operational-tools/architecture/orca.html | 2021-11-27T05:07:21 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../images/architecture/orca-architecture.png',
'orca architecture'], dtype=object) ] | docs.rhino.metaswitch.com |
新增内容
虚幻引擎4.19版可帮助你 潜心创意开发——各种工具皆清晰明了,以便,你将成为名符其实的导演,进一步实时掌控。
由Theia提,,
,)。,。。,。,)
,,。,。,你才能在不同面部上共享面部曲线。产(Asset) 菜单 - 移除所有骨骼轨迹(Remove All Bone Tracks) 选项。
如果,。,,, - ,,。,。。。,
Bug Fix: Changed the order of the Behavior Tree tick to allow Blueprint-based nodes to update before processing search data.
Bug Fix: Fixed a bug that caused a Behavior Tree asset containing a Run Dynamic Behavior node to become dirty upon opening.
Bug Fix: Fixed a Behavior Tree state change during a latent abort. Decorators and services updates are delayed until after the aborting node finishes.
New: Added the local decorator scope to the Behavior Tree Composite Nodes. The local decorator scope is controlled by the Apply Decorator Scopevariable. This addition provides better control of flow-observing decorators, allowing them to be active only in a specified branch of the tree.
New: Added a simple mechanism for logging Behavior Treenodes that reference empty or removed Blackboard keys.
Debugging Tools
New: Extended the EQS Testing Pawn with an option to specify navigation agent properties to support navigation-related EQS tests and features.
New: Added more Visual Logging functions to Blueprint.
Bug Fix: Fixed some existing Visual Logging Blueprint functions.
Navigation
Bug Fix: Fixed cases where changes to Nav Mesh Bounds Volumes properties were not causing Nav Mesh to rebuild.
Bug Fix: Fixed a failure to set the area flags when updating the Nav Area of a given Nav Mesh poly.
New: Made updates to allow the proper use of Nav Link Component via Blueprints.
Animation
Crash Fix: Fixed a crash when opening Animation Compression settings for newly imported Alembic File.
Crash Fix: Applied a fix for a rare crash when Notify Queue caches a pointer to the notify that if garbage collected, would have a pointer to invalid memory. We now cache a pointer to the owner of the notify memory so we can track its validity.
Crash Fix: Applied a fix for a Live Link module startup crash.
Crash Fix: Prevented threaded work from running in a single-threaded environment, which avoids a Linux server crash.
Bug Fix: Fixed debug builds hitting checkSlow when using a post process Anim Instance on a Skeletal Mesh Component without a main AnimInstance.
Bug Fix: Fixed some race conditions in Skeletal Mesh Resource Sync UV Channel Data and Static Mesh Render Data Sync UV Channel Data.
Bug Fix: Fixed a bug with adjacency DDC serialization in Skeletal Meshes.
Bug Fix: Non-rendered anim updates now run on worker threads.
Bug Fix: Changed the way Get Bone Poses For Time inside of the Animation Modifiers library retrieves the Bone Transforms for a specific time-code.
Bug Fix: Fixed Anim Sequence Base Get Time At Frame and Anim Sequence Base Get Frame At Time to return correct frame indices.
Bug Fix: Fixed pasted socket name getting an appended _0 when no other sockets contained that name.
Bug Fix: Reset Tick Records when doing 'Only Tick Montages When Not Rendered' since the montage will appear to have jumped when regular ticking resumes.
Bug Fix: URO now ensures you don't skip more frames than desired when switching LODs.
Bug Fix: Animation tasks are no longer created for Skeletal Meshes that have no anim instance.
Bug Fix: Skeletal Meshes that are not being ticked due to URO can no longer dispatch tick tasks.
Bug Fix: Fix for notifies not getting fired in cases where Always Tick Pose was set on Skeletal Mesh Components.
Bug Fix: Looping sound cues and particle systems used in notifies no longer leak a component per invocation.
Bug Fix: Recalc Required Bones now uses correct predicted LOD level instead of defaulting to 0.
New: Optimized Post Anim Evaluation when URO is skipping a frame with no interpolation.
New: Changed the notify system to allow notifies on follower animations in a sync group to be triggered.
New: Added two callbacks on Character Movement Component for the root motion world space conversion to allow users to modify the root motion:
Pre conversion (while it is still in component space).
Post conversion (when it is in world space).
New: Added Virtual Subjects to Live Link which are created within the client and contain the bones of multiple real subjects.
New: Added Initialization and Update Delta Time to Live Link Retargeter API:
Pushing skeleton to Live Link can now take source GUID.
Message bus source pushes GUID when sending skeleton.
New: Added 'Play Rate Basis' to Sequence Player node to scale Play Rate without having to do more expensive BP code.
New: Added support for Dialogue Waves driving Audio Curve Source Components.
New: Added default Get Axis Vector static function to Axis Option.
New: Skeletal Mesh Component Init Anim doesn't call Update and Eval anymore.
New: Rigid Body Anim Node Updates:
Now maintains Bone Velocity transfers through LOD changes.
Added support for transferring angular velocity.
Added option Transfer Bone Velocities to transfer bone velocities to simulation upon start, so ragdolls transition seamlessly from kinematic (animation) to simulation.
Added Freeze Incoming Pose On Start option to freeze incoming pose and stop ticking/evaluating rest of the AnimGraph.
New: Moved Update Component Pose Any Thread and Evaluate Component Pose Any Thread functions to Anim Node Skeletal Control Base to enable overriding these in child classes.
New: Added Pre Eval Skel Control Any Thread to Skel Control Base, to allow capture of incoming pose before SkelControl is evaluated.
New: Added the Empty function to Base Compact Pose and FCS Pose to release allocated arrays.
New: Added the notify class name to the notify tooltip.
New: Optimized redundant key removal on curves. Optimized code takes less than 1% of the time to execute of the original.
New: Added prebuilt binaries for the Maya Live Link plugin to Engine/Extras (Maya 2016, Maya 2017, Maya 2018).
New: Skeletal Mesh Render Data Get Max Bones Per Section is now an ENGINE_API exported function.
Animation Assets
Crash Fix: Fixed a crash with streaming if a montage is streamed out while an event is queued.
Crash Fix: A crash no longer occurs due to oversampling animation during compression.
Crash Fix: No longer crashes when the Skeleton has been changed and joints have been removed from a Pose Asset.
Bug Fix: Fixed an ensure that was triggering when compressing an animation which has had its Skeleton switched due to Skeletal Mesh Merging.
Bug Fix: Changed raw animation data evaluation so that Virtual Bone positions are built before interpolation is carried out instead of after.
This fixes a slight mismatch between the Virtual Bone positions in raw animation and compressed animation.
Bug Fix: Marker Sync position is now maintained across instant transitions.
Bug Fix: Animations with a scaled root bone now generate correct root motion.
Bug Fix: Changes Retarget Source hard reference to Soft Object Ptr, so that the Skeleton won't load all the references of Retarget Sources.
Bug Fix: Fixed Pose Asset retargeting not working for additive pose assets.
Bug Fix: Fixed issues with Pose Assets of additive blending using shortest path to blend, causing it to blend between rather than accumulating.
Bug Fix: Fixed Retarget Source for Pose Asset.
Changing Retarget Source in Pose Asset now works.
Pose Asset now saves original source pose, so that it can recalculate new pose when Retarget Source changes.
Retarget Source is used to bake the pose asset delta transform.
New: Native Animation Instances can now be set on Skeletal Mesh Components.
New: Anim Sequence Base is now entirely exported with ENGINE_API.
New: Added support for curve-only animation (required for facial pose retargeting).
New: Added inline time/frame settings to notify context menu.
Animation Blueprints
Crash Fix: Loading an Anim Instance with a re-instanced class no longer crashes.
Crash Fix: A crash no longer occurs when loading reparented Animation Blueprints that do not have an Anim Graph but are no longer Child Blueprints.
Crash Fix: Initialized Poses To Evaluate on Blend nodes to fix a crash from gathering stale data after reinitializing.
Bug Fix: Fixed "Modify Curve" Anim Node breaking certain nodes that were used before it (due to not caching bones correctly).
Bug Fix: Anim Dynamics correctly calculates simulation transforms and world vector transformations.
Bug Fix: Transition nodes no longer reference the wrong state node due to non-unique node GUIDs being introduced by copy and paste.
Bug Fix: Subinstance nodes inside states no longer fail to correctly create bridge variables on their skeleton class due to not correctly processing subgraphs in the Anim Blueprint compiler.
Bug Fix: Added a fix for animation reinitializing when replacing animation via Single Node Instance.
Bug Fix: Added code to correct duplicate node GUIDs during Anim Blueprint compilation.
New: Copy Pose node now supports copying curves as well as bone transforms. Enable Copy Curves if you want to copy curves (by default, this is not enabled).
New: Added ability for worker thread Animation Blueprint code to report to Message Log and added additive warning when playing an animation on an additive pose.
New: Added "Uncheck All Unconnected Pins" for button Break Struct nodes.
New: Added Native Begin Play function to Anim Instance.
New: Added ability to control whether or not a post process instance runs on a skeletal mesh component. Exposed both to Blueprints and anim editor suite
New: Added support for logical negation when copying to array properties in the fast path.
Deprecated: Get Scalar Parameter Default has been deprecated and replaced with Get Scalar Parameter Default Value.
Animation Tools
Crash Fix: Removing bones in a LOD on a Skeleton that has Virtual Bones no longer crashes.
Crash Fix: Crash no longer occurs when updating clothing paint mode after tab spawners have been destroyed by the hosting application.
Crash Fix: Attempting to zero out an array with the wrong size in Control Rig no longer crashes.
Crash Fix: Addressed a crash in Animation Editor menus caused by audio compression.
Crash Fix: No longer crashes when displaying a two bone IK gizmo for a node that hadn't had a chance to evaluate or had a zero alpha.
Bug Fix: Skeleton Tree now correctly no longer allows selection of filtered items.
Bug Fix: Bone modifications via transform type-in can now be undone.
Bug Fix: Fix to make sure Debug Bone Rendering of raw animation data in the Animation Editors actually shows the raw data when an animation uses additive track layers instead of showing the source.
Bug Fix: Fixed bad blend profile references from copy/paste operations on nodes using external profiles.
Bug Fix: Fixed Blend Profile properties not being correctly customized on anim nodes - and Animation Sequence references not being correctly filtered on anim nodes.
Bug Fix: Addressed Single Node Instances not getting ticked properly. Due to them not increasing UpdateCounter, and forcing a tick even if we're doing parallel ticking later.
Bug Fix: Removed recompress animation from curve track actions. This allows for smoother interaction on animations with a slow recompression time.
Bug Fix: Text is now red again in Anim Viewports.
Bug Fix: Fixed Set Root Motion Enabled Anim Data Modifier node (previously didn't set the enabled flag)
New: Exposed Sequence Recording Settings in Animation Editors.
New: Changed animation preview tooltips in Persona to stop loading the animation if it is not already loaded.
New: Tweaked debug display of Anim Sequences and added DeltaTime to AnimInstance debug.
New: The console command "showdebug animation" now shows current LOD and counters to know if Update/Eval/Cachebones/Init was called.
Also for URO settings, renamed Display Debug Custom to Display Debug Instance.
New: Added Anim Instance Display Debug Custom, to display custom debug info before AnimGraph display.
New: Removing a section when creating a clothing data entry will now just disable the section in the mesh, so that the operation is non-destructive to the mesh data.
New: Moved Live Link Message Bus Source heartbeats to their own thread using a new Heartbeat Manager singleton. This prevents sources from incorrectly being removed during Slate UI operations.
Import/Export
New: Refactored Facial Animation importing so it can be used from inside other modules.
Skeletal Mesh
Crash Fix: Undoing a change on a Skeletal Mesh that uses Virtual Bones no longer crashes.
Crash Fix: Setting a Master Pose Component on a Slave Component using the Rigidbody node during actor construction will not result in a crash.
Crash Fix: No longer crashes when clearing a Skeletal Mesh on a Skeletal Mesh Component with an active Post Process Anim Instance.
Bug Fix: Skeletal Mesh Component LOD info no longer cleans up all override resources and only cleans up one.
Bug Fix: Set the Used by Morph Target Material option to only the Materials affecting section deform by a Morph Target instead of all Materials used by the Skinned Mesh component.
Bug Fix: Multi-convex generation for Skeletal Meshes no longer uses incorrect index buffer.
Bug Fix: Correctly return whether a Skeletal Mesh section is shown or not when it has not been edited.
New: Exposed Show Material Section function to hide/show Skeletal Mesh material sections at runtime to Blueprint (also added Show All Material Sections and Is Material Section Shown functions).
New: Animation System owns previous Bone Transform with revision number that are used for motion blur or temporal AA, so that it can send to the render thread when RenderState is recreated.
New: You can set attached and parent to sync their LOD by setting Sync Attach Parent LOD to true.
Tools
Crash Fix: Skeletal Mesh Editor no longer crashes when importing more than 5 LODs.
Bug Fix: Refactored View menus in Animation Editors.
Bug Fix: Incorrect reporting of Animation Blueprint Fast Path is no longer caused by enabling the Compilation Manager.
Bug Fix: Compression Settings panel no longer allows "Use Adaptive Error" and "Use Adaptive Error 2" to be enabled at the same time.
Bug Fix: Ensured bounds inside of the Animation Editor(s) are correctly shown for Alembic Skeletal Meshes.
Bug Fix: Removed unused preferences from Persona Options.
Bug Fix: Persona Viewport settings are now per-asset editor.
New: Defaulted the Clothing Create menu to use the Skeletal Mesh Physics Asset.
New: All animation-related editors now have the ability to open up to 4 separate viewports onto the same scene, each with their own settings.
Added the ability to follow (and orbit) a specified bone as well.
Added a new "pinnable command list" widget to animation editors.
New: Preview buttons are now present where applicable inside the Animation Editors.
New: Added Mesh Paint settings to per-user settings and added preview vertex size so it's configurable by the user for use on differently sized meshes.
New: Auto-range feature added to cloth view ranges and a way for tools to extend those ranges when necessary.
New: Improved Skeletal Mesh Component debugging.
Extended the viewport text seen in Skeletal Mesh Editor.
Added lines for current cloth value.
New: Clothing visualizations can now be enabled while clothing paint mode is active.
New: Added a search bar to the Advanced Preview Scene settings panel.
Audio
Crash Fix: Fixed an issue with VOIP crashing on Mac.
Crash Fix: Fixed Ngs2 sources getting recycled while getting setup.
Crash Fix: Applied a fix for editing inherited audio component overrides in attenuation detail customization.
Crash Fix: Fixed a crash in the Audio Device function, Processing Pending Active Sound Stops.
Bug Fix: Fixed a case where audio is already set to be a preview sound. Sequencer sets the sound to be a preview sound when scrubbing in editor.
Bug Fix: Fixed a hang during multi-PIE shutdown when using a synth component.
Bug Fix: Sound mix fade time now fades audio properly.
Bug Fix: Fixed a bug during shutdown of audio mixer.
Bug Fix: The sample rate of synth components can now be overridden in C++.
Bug Fix: Fixed a bug in Android VOIP.
Bug Fix: Fixed a potential memory leak caused when starting and stopping a SynthComponent on the same frame.
Bug Fix: Fixed Ogg Vorbis 5.1 channel ordering channel maps in the Audio Mixer.
Bug Fix: Fixed an issue where audio sources were gently filtered near Nyquist when a source's LPF was set to max frequency in the new Audio Engine.
Bug Fix: Fixed a memory leak in the audio decompressor.
Bug Fix: Libvorbis DLL failing to load will now recover gracefully.
Bug Fix: Removed "Stopping sound already in the process of stopping" log spam.
Bug Fix: Envelope Attack and Release times are now setting properly on Synth Components.
Bug Fix: White Noise in the new Audio Engine no longer produces non-uniform noise.
Bug Fix: Fixed an issue in which the Delay Submix effect would allocate too much memory for delay lines.
Bug Fix: Recording audio in Recording Sequencer will no longer cause the Editor process to persist after shutdown.
Bug Fix: Fixed issues in wavetable sampleplayer synth component.
New: Added the ability for Submixes to define their own channel format. Currently, Stereo, Quad, 5.1, 7.1, and first order ambisonics are supported.
New: Made improvements to Unreal Engine 4's native VOIP implementation. Spatialization, Distance Attenuation, Reverb, and custom effects for a player's voice can now be controlled using the UVoipTalker Scene Component.
New: The new Microphone Capture Component enables feeding audio directly into a game for use with the Audio Engine and for driving in-game parameters.
New: Upgraded to Steam Audio Beta 10.
New: Added the ability to get envelope following Audio Components and Synth Components through BP delegates.
New: Added the ability for synthesizers to modify the requested sample rate on initialization.
New: Users can now have different Audio Plugin source settings for different Plugins.
New: Added a Panner Source Effect.
New: Sound Submixes and Sound Classes are now Blueprint types.
New: Added support for seeking with ADPCM.
New: Optimized Active Sound by not processing active sounds if they are out of range.
New: On Mac Media, audio playback uses the operating system media player mixer rather than mixing in Unreal Engine 4. This is currently only enabled on Mac, but may come to iOS in the future.
New: Oculus Audio Plugin has been updated. It now includes a Reverb Plugin, as well as support for Android and playing back Ambisonics files.
New: First-order Ambisonics files can now be imported into Unreal projects and played back using select Spatialization Plugins.
New: Added pre and post Source Effect Bus Send types. At this time, sending audio to post-source effects sends before distance-based effects, such as occlusion and distance attenuation are applied.
New: Multichannel file import is now supported.
New: The Allow Anyone To Destroy Me variable is now enabled on audio components.
New: The Resonance Audio Plugin for Unreal is now included as an Engine Plugin. The Resonance Audio Plugin supports binaural spatialization, reverberation, sound directivity, and ambisonics playback on Windows, iOS, Android, Linux, and Mac.
New: Audio Components can automatically manage attachment, opting in to cache their attach parent and attach socket. This enables them to attach to their cached parent only while playing audio, then detach after playback is completed. Set to false by default.
New: Optimizing the way audio stream chunks are serialized to take advantage of alignment optimizations on console.
New: Added a Blueprint function for enabling and disabling the Low Pass Filter on Audio Components, as well as setting the frequency of the filter.
New: Added "Owner" parameter to Play Sound At Location and Play Sound 2D functions to support "Limit to Owner" concurrency settings.
Automation
Crash Fix: Fixed a crash in Screen Shot Manager caused by an attempt to access a deleted String in an async lambda expression.
Bug Fix: Fixed issue with UnrealBuildTool and AutomationTool always being out of date, which forced a rebuild and made it impossible to debug operations dependent on them.
Bug Fix: Automated test screenshots now have more consistent overrides for gamma, eye adaption, dpi and other post process effects (regardless of capture mode).
Bug Fix: Fixed automation routines that list files in Perforce to return those whose last action was a rename.
New: Added automated test for per-platform blacklist defined in the relative platform engine ini file.
New: Automated test screenshots are now able to test against a hierarchical fallback platform and RHI.
New: Added button to automated test screenshot browser to add or replace all outstanding test reports (if appropriate).
New: Automated screenshot tests that fail due to size mismatch will now show a delta image, where mismatched bounds are colored red.
Blueprints
Crash Fix: Fixed inclusion of private plugin header files in nativized Blueprint C++ source.
Crash Fix: Resolved a potential crash that could occur prior to initiating a global "Find in Blueprints" indexing operation with source control disabled.
Crash Fix: Fixed a crash when using EDL to load certain data-only Blueprints that inherit from other Blueprint classes
Crash Fix: Fixed a crash on load in a nativized build caused by a reference to a Blueprint class containing a User-Defined Enum asset.
Crash Fix: Fixed a potential crash when collapsing nodes to a function when a potential entry pin had no links.
Crash Fix: Fixed a crash when using blueprinted components created by a specialized subclass of Blueprint.
Crash Fix: Added a guard fix against a reported crash when deleting a non-root scene component with no parent attachment.
Crash Fix: Prevented a reported crash when deleting one or more Actor components.
Crash Fix: Added a guard fix against a reported crash when setting bIsEditorOnly flag on attached scene components.
Crash Fix: Fixed an issue where split struct pins would not respect Allowed Classes on things like Slate Font Info, which could cause crashes when passing in the wrong type of asset
Crash Fix: Fixed occasional crashes when manually compiling simple Blueprints.
Crash Fix: Fixed several crash and data loss issues when undoing changes to user defined structs used in Data Tables
Bug Fix: In the Blueprint Material And Texture Nodes Plugin, clamping sampled HDR render target values by setting Range Compression Mode in the Read Surface Data Flags to RCM_MinMax now behaves as expected.
Bug Fix: User Defined Structures now store default values in the same way that native structs do, so they will now be correct in cooked builds and when added to containers. Changing the default value will also correctly propagate to unmodified instances in Blueprints or other user structs.
Bug Fix: Fixed missing Outer mapping at cook time on nested subobjects instanced with the New Object function in emitted C++ code during Blueprint nativization.
Bug Fix: Blueprint nativization now emits external plugin dependencies to the generated plugin descriptor file in order to mute UBT warnings at build time.
Bug Fix: Removed the Inverse Lerp node and redirected it to call Normalize To Range. Also Fixed incorrect value being returned by Normalize To Range when Range Min and Range Max are equal.
Bug Fix: Fixed a rare linker issue where Is Blueprint Finalization Pending would return true when trying to force-load subobjects that weren't ready to be loaded. This would prevent some placeholder objects from being replaced.
Bug Fix: We now ensure that "-build" is always passed to BuildCookRun automation workflows in the editor for projects configured with Blueprint nativization enabled so that it doesn't skip that stage.
Bug Fix: Revised both UAT/UBT-driven and custom code/data workflows to more seamlessly work with Blueprint nativization settings.
Bug Fix: Ed Graph Schema's Break Single Pin Link function is now correctly marked as a const function.
Bug Fix: Local variables can now be created in Blueprint function libraries.
Bug Fix: Fixed several issues with copy/pasting nodes that allow customizing which pins are visible.
Bug Fix: Light error sprite will no longer appear in Blueprint preview window.
Bug Fix: Copy-pasting a collapsed graph containing a map input will no longer result in a map with an unset value type.
Bug Fix: Prevented nativization from creating uncompilable code when using a non-referenceable term (e.g. subclass of, weak object pointer) in a select node.
Bug Fix: We now properly cast between mismatched soft pointer types in nativized Blueprint C++ assignment statements and function calls.
Bug Fix: Blueprints that are asynchronously loaded inside the editor are correctly registered with editor systems like the list of cast nodes.
Bug Fix: Text can be specified in Blueprint functions as default parameters.
Bug Fix: Fixed a serialization oversight in Ed Graph Pin that could lead to a compile error during Blueprint nativization.
Bug Fix: Added more descriptive log output after an attempt to break Blueprint execution on an unimplemented function/interface.
Bug Fix: Added Component nodes no longer lose their component type when another the component Blueprint is compiled
Bug Fix: Split struct pins now handle both user struct variable renaming and native Core Redirects variable renaming, the same way that Make/Break did before.
Bug Fix: Implemented Get Redirect Pin Names for async nodes. This fixes a bug where property redirectors on async nodes did not work correctly.
Bug Fix: Fixed issues where type would not be correctly propagated through multiple redirect nodes.
Bug Fix: User Defined Structures can now be imported from text using the human-readable field names, which fixes their use in nested arrays inside data table rows.
Bug Fix: User Defined Structures that are nested inside other user structs can now be safely modified. It is now safe to modify the inner struct's defaults without corrupting the outer struct, and those default changes will be correctly propagated to the outer struct.
Bug Fix: Blueprint nativization now correctly generates and builds code for "Client" builds
Bug Fix: Improvements to the Cpp backend to allow VC++ to compile nativized code more effectively
Bug Fix: Subobjects of assets are now correctly referenced by nativized code Disabled animation fast-path optimization when running a native anim BP, as native code is faster!
Bug Fix: Fix split pins on macro instances for resolved wildcard pins not resetting when disconnected
Bug Fix: Child Actor Template label in the details panel will now correctly reflect the selected class.
Bug Fix: Blueprints that have no parent class can now be reparented when using the compilation manager
Bug Fix: Harden pin serialization logic, specifically to prevent crash due to bad 'conform' routines when renaming functions
Bug Fix: Fixed ensure when initializing replication data for Blueprint generated classes
Bug Fix: Fixed missing marching ants on trace wires when stopped at a breakpoint in macro and collapsed source graphs.
Bug Fix: Fix issue where references from a component to its parent Blueprint would be cleared on compilation.
Bug Fix: Fixed an ensure when exiting PIE after tracing into a Blueprint Function Library source graph while single-stepping.
Bug Fix: Fixed incorrect access to nested instanced subobjects in nativized Blueprint class constructor code.
Bug Fix: Prevented debug object selection dropdown from displaying objects with pending-kill outers.
Bug Fix: Fixed a packaging build failure that could occur with Blueprint nativization enabled and EDL disabled.
Bug Fix: The proper syntax is now being emitted for set/map fields containing converted assets to C++ class headers when Blueprint nativization is enabled.
Bug Fix: Fixed a scoping issue for local instanced subobject references in nativized Blueprint C++ code.
Bug Fix: Fix several issues with the class pin on the Spawn Actor From Class node
Bug Fix: Added more verbose log output to assist with debugging any crash reports due to an invalid object reference detected in the BP event graph frame during garbage collection.
New: Added Call Stack controls for viewing Blueprint call stacks when paused at a breakpoint. These are available from the Developer Tools menu.
New: Improved Blueprint single-step debugging and breakpoints.
Added new Step Over (F10) and Step Out (ALT+SHIFT+F11) debugging features.
Added the ability to break directly on and step into/over/out of macro and collapsed graph nodes.
New: Updated numeric graph pins to use a numeric editor.
New: Validated Get option for Soft Object/Class references in Blueprints.
New: The Game Mode Base function Get Default Pawn Class For Controller can now be called in Blueprints.
New: Added an Editor Preferences setting to disable navigation to native functions from call function nodes.
New: Deprecated properties can still be accessed in Blueprints so long as they have Blueprint getters and setters. This creates a clean deprecation path for blueprint-accessed properties.
New: Added new Blueprint functions to query parameters from Material Instance Constant objects. Previously, these were only available on Material Instance Dynamic objects. The functions included are:
Get Scalar Parameter Value
Get Texture Parameter Value
Get Vector Parameter Value
New: Added a step to the compilation manager to skip recompilation of classes that are dependent on a given classes function signatures when those signatures have not changed. This greatly improves compile times for Blueprints with large numbers of dependencies.
New: Preserve graph editor widgets when clicking on a hyperlink, this speeds up navigation via the debugger "step" command and Find in Blueprints control - both are now more responsive.
New: Added Vector Length XY to Blueprint math library.
Core
Crash Fix: Fixed a crash on shutdown due to component visualizers not being cleaned up properly by the Unregister Component Visualizer function.
Crash Fix: Network platform file heartbeat function can no longer cause a crash by firing during async loading.
Crash Fix: Fixed a potential garbage collection crash caused by a stale pointer in Anim Instance Proxy.
Crash Fix: Fixed CVar access crashes with the async loading thread enabled.
Crash Fix: Fixed a crash in EDL loading when simple construction scripts and inherited component handlers were not being loaded early enough in complicated Blueprint hierarchies.
Crash Fix: Rntering a single quote in an ini field will no longer crash the editor or make a project fail to load.
Crash Fix: Dumping stats with massive callstacks no longer causes a crash.
Crash Fix: Fixed some crashes with the new Blueprint Garbage Collection weak reference feature (and possibly Hot Reload) by updating the archive's serialized property when serializing arrays, sets, or maps to point to the inner property.
Crash Fix: Fixed a crash that could happen when GC Objects were constructed and destroyed during engine shutdown.
Crash Fix: An editor startup crash when using cook-on-the-side has been fixed.
Crash Fix: Fixed an issue where packaged games that disabled the TCP/UDP messaging plugins would crash on startup in shipping configuration.
Crash Fix: Fixed a crash when the engine was reporting package errors.
Crash Fix: Running DDC commandlet with an invalid package name no longer causes crashes.
Bug Fix: Multicast Delegate Property's Identical function now has the same behavior as Delegate Property's Identical function. This prevents it from giving incorrect results and causing serialization errors.
Bug Fix: The Asset Registry module is now always preloaded on startup, even if EDL is disabled.
Bug Fix: Fixed Thread Sanitizer warnings in stats system. Optimized Cycle Counter's Start function to only read the stat name once (atomically).
Bug Fix: Fix ambiguities in calculating root directory.
Bug Fix: When serializing a compressed archive with multithreaded compression enabled, the engine will wait for the oldest async task instead of spinning.
Bug Fix: When Async time limit is set to a low value, the linker will no longer get stuck in a loop of reloading the preload dependencies.
Bug Fix: Fixed a small race condition with config data in Low Level Mem Tracker.
Bug Fix: Fixed some data races in Compression's stats.
Bug Fix: Shared Ptr, Shared Ref and Weak Ptr assignment operators are now implemented as copy-and-swap, to avoid bad states being visible to any destructors being called.
Bug Fix: Added atomic reads to Shared Ptr, Shared Ref, and Weak Ptr in Thread Safe mode.
Bug Fix: Used new Platform Atomics function, AtomicRead, in Thread Safe Counter‘s Get Value function.
Bug Fix: Fixed thread-unsafe sharing of a static array in the String function, Parse Into Array WS.
Bug Fix: Fixed a regression in hot reload speed accidentally introduced in 4.17, caused by rerunning UnrealHeaderTool redundantly if a source file has changed in the same folder as a header file which contains at least one UCLASS. This has also been fixed in non-hot-reload compiles.
Bug Fix: Removed a redundant call to Update Vertices when creating a level's model components, which was causing a race condition with the renderer thread.
Bug Fix: Suppressed Thread Sanitizer warnings in the Async Read Request interface.
Bug Fix: Fixed the debugger visualization of Array with an Inline Allocator or Fixed Allocator. Improved the visualization of Sparse Array.
Bug Fix: Fixed the wrong "P_GET_" macro being used in the generated code for Soft Class Ptr properties.
Bug Fix: Removed a redundant class access modifier from Queue.
Bug Fix: Freed the stats thread on shutdown.
Bug Fix: Is Integral now correctly reports true for int64 on 32-bit platforms.
Bug Fix: Fixed truncation errors in the parsing of very large integer values in FJsonObject::GetIntegerField().
Bug Fix: Unreal Header Tool's error reporting now specifies the correct source file when a header doesn't specify a .generated.h as its last #include.
Bug Fix: Deleted an unnecessary specialization of Make Array View.
Bug Fix: Removed String's char pointer constructor's confusing void pointer parameter.
Bug Fix: Base64's Encode function is now const-correct.
Bug Fix: Json Value no longer stringifies whole numbers as floats.
Bug Fix: Relaxed enum property importing to allow valid integer values to be imported too. This was previously made more strict, which caused a regression in Data Table importing.
Bug Fix: Fixed memory leaks for pipe writes and added data pipe writes.
Bug Fix: Fast XML now properly parses attributes with legally unescaped single and double quote characters
Bug Fix: Zipping in UAT now always uses UTF-8 encoding to prevent Unicode issues.
Bug Fix: Fixed pure-virtual function call in Multicast Delegate Property.
Bug Fix: Fixed console output not working when using "-log" in a shipping build.
Bug Fix: Loading maps via content browser or async load in the editor no longer leads to their references not loading properly.
Bug Fix: Fixed an issue where deleting a source package would not cause the generated cooked package to get deleted while doing an incremental build.
Bug Fix: Enum redirects handle typedef enums properly when a value redirect is not also specified. Enum classes continue to work correctly.
Bug Fix: Fixed an issue where the Streamable Manager could return partially loaded objects when called from Post Load. It will now schedule a callback for when those objects finish loading instead.
Bug Fix: Comparing NAME_None to an empty string, or a Name constructed from an empty string, now correctly returns true.
Bug Fix: Fixed cmake include directory generation putting engine include files in the game root.
Bug Fix: Fixed cases where an include path that is external to both the engine and game root were incorrectly getting the game root prepended.
Bug Fix: Many fixes were implemented for Thread Sanitizer warnings.
Bug Fix: Fixed build command line in generated projects missing a space before the compiler version override.
Bug Fix: If UAT is invoked with platforms in both the "-platform" and "-targetplatform" command-line switches, we will now build using all of them rather than just the ones in "-targetplatform".
Bug Fix: Fixed cases where String's Printf function was used without a formatting string.
Bug Fix: Updated loading movie code to properly calculate delta time when throttled at 60 frames per second.
Bug Fix: Fixed an inconsistency with how Soft Object Ptr case is managed between Linker Save and Archive Save Tag Imports, which could cause a cook ensure under some circumstances.
Bug Fix: Fixed an occasional loading assert in Script Struct's Post Load function caused by a race condition on the Prepare CPP Struct Ops Completed variable.
Bug Fix: Changed Visual Studio Code project file generation to use "taskName" rather than "label" for C# build task names.
Bug Fix: Improved whitespace handling in both Map Property's and Set Property's "ImportText_Internal" function. Emptied the map or set if an error occurs during a text import of a property to avoid bad container states.
Bug Fix: Fixed a potential buffer overrun in Set Property.
Bug Fix: Map Property's Import Text function now correctly skips duplicate keys.
Bug Fix: Unreal Build Tool now enforces Development and Debug configurations in existing C# projects. Fixes command-line builds and prevents solution from being marked dirty on initial load.
Bug Fix: Cleaned up delegates in UObjectGlobals.h, fixing several incorrect comments and moving some editor delegates into WITH_EDITOR.
Bug Fix: Fixed an optimization issue with CPU benchmarking. Also added better support for debugging local rocket builds.
Bug Fix: Fixed the final enumerators being missing from the generated FOREACH_ENUM_ macros.
Bug Fix: Parallel .pak generation now uses the number of reported CPU cores. Also fixed an issue with concurrent .ini file reading.
Bug Fix: Dedicated servers do not include RHI modules except NullRHI.
Bug Fix: Changed String's Parse Into Array function to use Reset instead of Empty on the passed in array, allowing it to be approximately resized
Bug Fix: Fixed accounting for Curve Table assets in "obj list".
Bug Fix: Fixed a bug that caused the engine to fail to identify a running instance of Visual Studio as having the current solution open.
Bug Fix: Fixed the "quote issue" with foreign project build tasks on PC.
Bug Fix: Fixed limit on number of async precache tasks while cooking, due to counter not being updated correctly.
Bug Fix: The Windows version string is now determined correctly.
Bug Fix: Asset registry memory usage reported by memreport is now correct.
Bug Fix: Fixed Platform File Pak interface's Find Files Recursively function not correctly calling the base class implementation.
Bug Fix: Literal soft object pins are correctly cooked and tracked in the reference viewer. This may cause assets referenced by disabled nodes to be cooked into your project. In this case, the reference viewer will track which Blueprints are causing this.
Bug Fix: The serializing object will now be logged during reachability analysis in the event that an invalid object reference is detected during a garbage collection pass.
Bug Fix: Hot reloads no longer occur during PIE, which was causing bad states in Blueprint types when they are reinstanced.
New: Encryption/Signing Key Generator. A plugin which allows the automatic creation and configuration of in-engine data cryptography through the project settings, under the "crypto" header.
Added options to plugin settings which allow encryption to be used on the various asset files in a pak. Encryption can be used on just the .uasset file, which contains only the package header information, or the .uexp and .ubulk files too, which contain the rest of the package data. Encryption on .uasset files provides a good compromise between data security and runtime performance.
New: Added Level Streaming CVars to customize streaming for your project. These all default to enabled, but can be disabled to make different trade offs for memory and level loading performance.
"s.ForceGCAfterLevelStreamedOut" - Whether to force a GC after levels are streamed out to instantly reclaim the memory, at the cost of a hitch. Defaults to on.
"s.ContinuouslyIncrementalGCWhileLevelsPendingPurge" - Whether to repeatedly kick off incremental GC when there are levels still waiting to be purged. Defaults to on.
"s.AllowLevelRequestsWhileAsyncLoadingInMatch" - Enables level streaming requests while async loading (of anything) while the match is already in progress and no loading screen is up. Defaults to on.
Added several new stats for potentially slow code.
Added SCOPE_CYCLE_UOBJECT macro, instead of using Scope Cycle Counter UObject directly.
New: Cheat cvars are now usable in Test builds by default. Can be overridden by defining ALLOW_CHEAT_CVARS_IN_TEST as 0 in Target.cs files.
New: Make Weak Object Ptr function, which constructs a Weak Object Ptr from a C++ pointer by deducing its type. Fixed Weak Object Ptr's constructor which implicitly removed const.
New: Upgraded the Array function, Insert, to allow it to take an array with any allocator type. It can also move elements out of another Array.
New: Array now has a "_GetRef" variants for many of its functions, which will create an element and return a reference to it rather than its index in the Array. These functions are: "Insert_GetRef", "InsertZeroed_GetRef", "InsertDefaulted_GetRef", "Emplace_GetRef", "EmplaceAt_GetRef", "Add_GetRef", "AddZeroed_GetRef" and "AddDefaulted_GetRef".
New: There is a new Multi Map function, Append, to match the Map class' function of the same name.
New: Base64 encoding library now supports ANSI/wide characters, decoding streams without padding marks, and decoding into preallocated buffers without writing dummy padding bytes.
New: Integer support added to Is Aligned.
New: Static asserts to stop alignment functions (Align(), IsAligned(), etc.) being called with non-intergal, non-pointer types.
New: Visual Studio Code project file generation now creates a .code-workspace for the workspace. Allows foreign projects to "mount" the UE4 folder so that the engine tasks are available, and all engine source is visible to VSCode for searching purposes
New: Visual Studio Code - Add launch task for generating project files for the given folder
New: Ensure that all branches within Generic Platform Misc's Root Dir function produce an absolute path with no duplicate slashes. Remove relative-to-absolute conversion of the root directory from the Paths function, Make Standard Filename, now that we know Root Dir always returns an absolute path.
New: Added Net Serialize to Date Time.
New: Vector 4 now features a -= operator.
New: Allowed Visit Tuple Elements to take multiple tuples of the same size. These are now iterated in parallel and will invoke the visitor with multiple arguments.
New: Minimal Name now has an Is None function.
New: Relaxed memory ordering atomic operations.
New: Atomic object wrapper has been created. This guarantees atomic access to the underlying object, with additional memory ordering guarantees.
New: Platform Atomics now has an Atomic Read function.
New: Added overloads of atomic functions to handle int8, int16, and int64, in addition to the existing int32.
Switched Android and Linux builds over to use the Clang implementation.
New: Command-line option "-NOEDL" will disable EDL.
New: Command-line "-diffonly" mode added to cook commandlet. This mode compares packages on disk that have already been cooked with packages currently being cooked to detect deterministic cook issues.
New: Added As Const helper function, for const-ifying pointers and references.
New: Checks to prevent Flush Async Loading and Load Object / Load Package from being called from any threads other than the game thread.
New: Tickable Base Objects can now avoid calling Is Tickable every frame if they will never or always be ticked.
New: Denormalized values are now allowed when converting float32 to float16.
New: Added static asserts to Logf and checkf to ensure that they're always called with a literal formatting string.
New: Cook metadata like DevelopmentAssetRegistry.bin will no longer be packed into a shipping game.
New: Added a new target rule, "Use Inlining", to UnrealBuildTool to disable inlining for debugging purposes - defaults to true.
New: Opened .pak files can be listed with the console command "DumpOpenedFiles".
New: Added a new command line option, "-statnamedevents", for enabling named events.
New: Json Object Converter now supports sets in its Json Value To UProperty function.
New: Mutable lambda functions can now be bound to delegates.
New: To go along with changes to the Size Map tool, the way that ResourceSize is calculated has changed. Exclusive is the same but Inclusive is now EstimatedTotal and represents the total size of the object and all of its sub objects, but not any external assets it references. It now tries to estimate the size on a generic cooked platform so it will handle texture LOD bias better.
New: The Platform Stack Walk function, Capture Stack Backtrace, now returns the stack size rather than checking for the first null pointer, to prevent truncated callstacks if parts of the stack are zeroed out.
New: Improved "caching file" message in networkplatformfile so it says "Requesting file..." and is only output when we actually request the file from the server.
New: Improved Variant's support for move semantics. Switched Variant Types over to an enum class to aid debugger visualization.
New: When cooking with Generate Chunks and the asset manager enabled, AllChunksInfo.csv is now written out to the CookMetadata directory. This file can be used to look for historical chunk size changes, and it now uses the per-platform cooked size.
New: Added more debug info for ensure in Linker Save when a name that has not yet been mapped is being serialized.
New: Tweaked "LogInit" command-line logging to clarify when the command line is incomplete due to some arguments being filtered out.
New: Promoted Statistical Float (a utility class to track basic statistics of a real number data series with constant storage) into Core from FunctionalTesting
New: Made optimizations to Paths function, Is Relative.
New: Improved cook performance by optimizing the EDL assumption verification code.
New: Removed all locks from EDL Cook Checker to improve Save Package performance.
New: Optimized adding and removing objects to GC Object Referencer.
New: Switched our allocator to ANSI when running a Thread Sanitizer build, for better error reporting.
New: Improved filename to long package name resolution when providing a relative path.
Removed: Move-unaware allocators are no longer usable with containers.
Removed: Startup Packages have been removed, along with the associated code.
Removed: Removed deprecated delegate functions.
Editor
Crash Fix: Fixed a crash when opening the level viewport context menu if the actor-component selection is out of sync.
Crash Fix: Details panel no longer crashes after compiling Blueprints that have edit condition properties.
Crash Fix: The .fbx scene importer no longer causes a morph target import crash.
Bug Fix: Added a value threshold for Paint and Fill values while using the Cloth Painter to prevent negative numbers.
Bug Fix: Fixed Camera Rig Rail Preview Meshes to match Spline Component .
Bug Fix: Added ".vscode" folder to generated .gitignore file for new projects.
Bug Fix: Fixed an issue with content browser columns not sorting properly.
Bug Fix: Made the font editor canvas respect DPI scale.
Bug Fix: Fixed a crash when dragging and dropping a non-array UI element onto array element..
Bug Fix: Fixed "Game Gets Mouse Control" setting.
Bug Fix: Duplicate actors are no longer created for USD primitives that specify a custom actor class.
Bug Fix: Moving a copy of an Actor to a different persistent level or sublevel now only dirties the target level.
Bug Fix: Clamped "Assets to Load at Once Before Warning" so it cannot be set below 1.
Bug Fix: Fixed auto import to import into the correct map folder.
Bug Fix: Fixed reordering of materials when importing a LOD containing new materials.
Bug Fix: Fixed a crash using Actor Details with the struct details panel.
Bug Fix: Fixed .fbx importer scale not always applying.
Bug Fix: The "Add New C++ Class" dialog now remembers the previously-selected module.
Bug Fix: Source Control Submit Files now interprets Escape key as if the user clicked "cancel".
Bug Fix: Fixed a leak in the Git plugin.
Bug Fix: Fixed Resave Packages working poorly with projects on other Windows drives with names like X, Y, Z.
Bug Fix: Prevented hit proxies from being rendered for cursor query when the cursor is not visible.
Bug Fix: Source control submenu menu customization
Bug Fix: Fixed window size and position adjustment not accounting for primary monitor not being a high DPI monitor when a secondary monitor is.
Bug Fix: Expanded array elements no longer collapse after reordering with drag-and-drop.
Bug Fix: Fixed a bug where Undo History would be cleared up when starting a multiplayer game in editor with 2 or more players.
Bug Fix: Fixed a crash when additional level viewport sprites are added after the level editor module is loaded.
Bug Fix: Fixed pending netgame travels in Play in Editor games erroneously stating there was a problem browsing to the pending map.
Bug Fix: Opening a color picker no longer causes values to change each time.
Bug Fix: Source control module now properly allows you to select a workspace when a dialog is up (e.g. the save asset dialog).
Bug Fix: Fixed a bug that caused Visual Studio to open repeatedly.
Bug Fix: Prevented importing more than MAX_SKELETAL_MESH_LODS when importing a skeletal mesh.
Bug Fix: Array reordering in a Blueprint now propagates to placed instances.
Bug Fix: Fixed a crash that could happen while hot-reloading content in the editor.
Bug Fix: Fixed levels not being lockable or unlockable if they are not checked out.
Bug Fix: Fixed HLOD auto-clustering incorrectly including transient and template actors.
Bug Fix: The Enter key now works as expected for details panel asset pickers.
Bug Fix: Removed circular referencing to parent window when creating an asset from an object.
Bug Fix: Perforce Plugin: Removed all calls to methods that would store passwords on the users local machine.
New: Added "favorite folders" to the content browser. You can add or remove folders from the Favorites list in the folder's right-click context menu, and hide or show the Favorites list in the Content Browser options.
New: Added support for the replay (demo) system, both record and playback, in PIE sessions. Demos from PIE sessions are compatible with those made in standalone instances, and vice versa.
New: Added the ability to have conditionally overridden HLOD settings from project-global defaults.
New: Created an Actor Component to decode Linear timecodes from an audio source. Outputs timecode, or timecode event.
New: Added an editor setting for the shared DDC location to use.
New: Improved cook process asset error handling including content browser links and better failure notification
New: Added support for canvas-based DPI scaling.
New: If a shared DDC is not being used with Perforce, a notification with a link on how to setup a shared DDC will be presented.
New: Unattended commandlets can now rename and save packages.
New: Added camera speed scalar setting and Toolbar UI to increase range on camera speed presets.
New: Added a setting to enable/disable high DPI support in editor.
New: Replaced the skeletal mesh import option "keep overlapping vertex" with thresholds, enabling the user to control the vertices' welding thresholds.
New: The Persistent Level in the Levels tab can now be locked.
New: Added the Auto Checkout function to Asset Rename Manager for commandlet usage.
New: The realtime level editor viewport is disabled if running under remote desktop.
New: Added the project setting "bValidateUnloadedSoftActorReferences" to the Blueprints section. This setting is true by default. Setting it to false it will disable the code that loads assets to check for references when deleting/renaming an actor.
New: Added documentation tooltips showing type information to header columns of DataTables.
New: Added Rename Assets With Dialog to Asset Tools. All previous implementations now use the "With Dialog" version. Delete Object and Duplicate Asset can be completed without dialog.
New: Added a variable called GIsRunningUnattendedScript. If true, we are running an editor script that should not prompt any modal dialog. Can be set for commandlets with "-RUNNINGUNATTENDEDSCRIPT" command-line option.
New: Many improvements to the Asset Audit, Size Map, and Reference Viewer tools to support their use in analyzing cooked content
New: Added a function. Get All Content Browser Command Extenders to Content Browser Module that supports registering commands/keybinds to extend the content browser via plugins.
New: Added "Don't show this again" checkbox to warning about deleting actors referenced by HLOD Actors.
New: Added new options to the "ResavePackages" commandlet when using "-BuildHLOD":
"-ForceEnableHLOD" turns on Hierarchical LOD in the level automatically.
"-ForceSingleCluster" turns on the Force Single Cluster for Level" option automatically.
"-ForceHLODSetupAsset=
" clobbers the level's customized HLOD asset with the specified one..
"-SkipToMap" will resume generating HLODs at the specified map name (going in the order of map files found while searching folders).
New: In the glTF importer plugin: Import Static Mesh, Material, and Texture assets from the Khronos glTF 2.0 format. Text (.gltf) and Crash Fix: Fixed frequent crash when opening Config Editor more than once per session.
New: Added support for binary (.glb) files are supported.
New: Improved morph target import time.
Removed: Removed .fbx export file version 2010 compatibility. The 2018 .fbx SDK does not export to 2010 and earlier file formats.
Content Browser
Crash Fix: Fixed a crash when renaming an asset without an enabled source control via the Asset Tool.
Crash Fix: Fixed a crash that could happen when content hot-reloading a Level with streaming sub-Levels.
Bug Fix: Fixed the asset search suggestions list closing if you clicked on its scrollbar.
Bug Fix: Selection no longer resets when changing certain view settings.
Bug Fix: Fixed the row structure tag not appearing for Data Table assets.
Bug Fix: Fixed an issue so that we verify a folder is newly created before removing from the tree list.
Bug Fix: Fixed an issue with dragging and dropping folders.
New: Added a Clear option for the Content Browser folder search box when using the escape key.
New: Improved responsiveness of Open Asset dialog and when searching or filtering assets.
New: Added the ability for the Reference Viewer to show and copy references lists for nodes with multiple objects, or multiple selected nodes.
Foliage
Crash Fix: Fixed a crash for Instanced Static meshes and Foliage when used with a Static Mesh that had an invalid lightmap coordinate index set.
Crash Fix: Fixed a crash that could happen when replacing placed instances with another asset type.
Bug Fix: We now properly update random streams for Static Mesh instancing only when adding new instances or from saved data.
Bug Fix: Fixed a Static Mesh Instancing issue where the proper index would not be removed after multiple add and removes.
Bug Fix: When duplicating Static Mesh Instances through the Details panel, the source transform is now correctly used for the result.
Bug Fix: Fixed an issue with a special case where building a Blueprint that used a Foliage type would remove all visible Foliage already painted.
Bug Fix: When moving Foliage from one level to another, if there was an error during the process, it would not abort the process gracefully.
Bug Fix: Fixed a case where having very low FPS would cause the foliage to disappear for a few frames while moving the camera.
Bug Fix: Fixed an issue that prevented foliage addition during PIE or Simulate mode.
Bug Fix: Fixed a special case where adding and removing multiple times would cause the regeneration of random streams for all changed instances.
Landscape
Crash Fix: Fixed a crash when updating Landscape grass.
Crash Fix: Fixed a crash where you would use the Spline Component with no point specified.
Crash Fix: Fixed a case where it was possible that doing undo on Landscape would cause an assert or crash.
Bug Fix: Fixed an issue that could cause Landscape shaders to be recompiled very often during the cook process.
New: Added the possibility to update the Landscape Material at runtime using Material Instance Dynamic Materials.
Material Editor
Crash Fix: Fixed a crash when filtering Material Layers drop-down with no set value.
Crash Fix: Fixed a crash caused by the Layered Material property widget of the Material Instance Editor.
Crash Fix: Fixed an assert on expanding vector param in Layered Material.
Bug Fix: Fixed an issue to not show empty Global group if it only contained the Material Layer parameter.
Bug Fix: Fixed an issue with Details panels to now "own" a color picker so that a different Detail panel refreshing doesn't close it. This fixed the refreshing state of the graph after changing the texture or color parameter values.
Bug Fix: Fixed an issue to allow visible parameters parameter retrieval to correctly recurse through internally called functions. Previous checks were intended to prevent function previews from leaving their graph through unhooked inputs but unintentionally blocked all function inputs.
Bug Fix: Fixed an issue with Material overrides to display the actually used values even when greyed out.
Bug Fix: Fixed a reset to default issue on Material Layer Parameter node and layers.
Bug Fix: Changing a parameter in the Material Parameter Panel now dirties the Material correctly. Changing a layer parameter in the Material Instance Editor now refreshes the details panel as well.
Bug Fix: Fixed an issue with thumbnails for Material Function Instances not being able to delete new function instances.
Bug Fix: Empty Material groups are now being pruned correctly.
Bug Fix: Toggling "Show Background" now updates the background correctly.
Bug Fix: Material window now uses the background color set in preview scene.
Bug Fix: Fixed an issue to properly display the Material node preview label is now centered on HiDPI screens.
New: Turned on Material Layering by default.
Adding parent field and asset field to Material Layers and Material Layer Blends. Setting the parent restricts the assets available, and setting the layer or blend asset fills in the parent as well.
Added Content Browser filters for Material Layers, Material Layer Instances, Material Blends, and Material Blend Instances.
New: Added a Parameter Defaults tab to the Material Editor so users can see and change the default value of all the parameter nodes currently in the Material Graph, also added Parameter Defaults tab support for Material Functions.
New: Save to Child and Save To Sibling buttons now work for material layer parameters and are shown on the material layer parameter panel.
New: Creating a new parameter immediately allows naming.
New: Added axis input to the material editor's viewport client to allow rotating the preview viewport.
New: Material Blends now have plane mesh previews in their icons.
New: Added Material Layers Parameters Preview (all editing disabled) panel to the Material Editor.
New: Layers in Material Layer Functions can now be renamed, except Layer 0, which is always named "Background".
New: Added access to a Material Function Instance's parent in the Material Instance Editor.
New: Disabled being able to create a layer or blend in the asset drop-down, sections of the stack that have been disabled now inactivate that part of the UI.
New: The create a Layer or Blend in asset drop-down menu has been disabled. Sections of the stack that have been disabled are no inactive for that part of the UI.
New: Create Function Instance now indicates if you are making a Layer or a Blend.
New: Updated the optional example Layers and Blends checkbox in the Experimental Settings so that it makes basic Layers and Blends when you create new assets.
New: The Parent field in Layer and Blend Instances is now editable.
New: Any applicable Layer and Blend compile warnings now also appear in the Layer or Blend itself. We enforce only two layers in a blend and prevent users from deleting Layer or Blend output nodes.
Removed: Parent drop-down has been removed from Layers and Clends. A filter button has been added instead.
Media Framework
Crash Fix: Fixed a crash with Media assets in Media Sound Component's "Update Player."
Crash Fix: Fixed a crash when opening AVF Media after opening multiple media players.
Crash Fix: Fixed a crash due to a race condition by adding a missing scope lock to WMF Media.
Crash Fix: Fixed a crash for media textures that could happen if the media player is generated from GC clustered Blueprint.
Bug Fix: Fixed external textures not registering correctly in all cases for the Engine.
Bug Fix: Fixed Media Asset issue with Media Player's Previous and Next not playing automatically if the player was playing previously.
Bug Fix: Fixed an issue with image media player never finishing initialization if loading failed.
Bug Fix: Fixed an ensure when cooking projects that use Image Media Source.
Bug Fix: Fixed media sessions that were no generating Playback End Reached event when session forced to stop for WMF Media.
Bug Fix: Fixed an issue with video capture on Windows 7 where the feed would display black or error out when selecting a different track and format.
Bug Fix: Fixed issues with tracks being reported in reverse order.
Bug Fix: Fixed an issue where external textures referenced by a Material before being associate with a Media Player never having their uniform expressions reached.
New: Added simplified track switching to enable seeking.
New: Added a notification if no video track is available or selected to the Media Player Editor.
New: Added the ability to disable video output when no video track is selected for Image Media.
New: Added verbose logging to the Media Player BP functions for Media Assets.
Removed: Looping option has been removed from playlists for Media Assets.
Sequencer
Crash Fix: Fixed a copy/paste crash that could happen when only processing Movie Scene Copyable Binding and objects that can be spawned by the movie scene spawn were registered.
Crash Fix: Fixed a crash when an existing hotspot is null.
Crash Fix: Fixed a crash when pasting a camera cut track that happened when the outer track isn't set to the movie scene.
Crash Fix: Prevented a crash by ensuring that the null object in property actuate when there's a track without a binding object. This occured when pasting a property track to the root.
Crash Fix: Fixed a crash when dragging a Level Sequence into the tree area.
Crash Fix: Fixed a crash caused by erroneous persistent references to FSequencer.
Crash Fix: Fixed a crash when calling Set Playback Position in an event when adding Set Playback Position as a latent action.
Crash Fix: Prevented a reload crash that could happen if a Sequence was being recorded in a currently opened Sequencer. On Recording Started and On Recording Finished are now set to multicast delegates.
Crash Fix: Added a nullptr check to fix a crash in Actor Sequence when it doesn't exist.
Crash Fix: Fixed a crash opening a new Sequence while another is still active.
Crash Fix: Fixed a crash when an Actor Factory is not found.
Crash Fix: Added null checks to prevent a crash when saving the default state of a spawnable.
Bug Fix: Fixed an issue to unregister missing track editors in Movie Scene Tools.
Bug Fix: Fixed an issue with quaternions > rotation > quaternions so that unwinding is only processed if the last transform is valid.
Bug Fix: Fixed a rotation rotator to quaternion to rotator conversion which prevents you from typing in a rotation of 0,0,320 into the Key Editor. Unwind rotations from the previous transform to the current transform so that the nearest rotation is set rather than the rotator to quaternion to rotator conversion.
Bug Fix: Sequencer: Set min/max values for generic key area so that they don't default to 0,10.
Bug Fix: Fixed an issue returning unhandled only if not dragged. There was an issue where dragging in the track area would sometimes leave the handled state with the time slider controller to not allow you to pop up a menu with the movement tool.
Bug Fix: Fixed a problem to now defer Details panel updates on scrubbing and playing.
Bug Fix: Fixed an issue by adding a null check and warning for invalid Get Parameter Collection Instance.
Bug Fix: Fixed an issue to set row index when creating a new take
Bug Fix: Fixed issues with pre-edit property chain broadcast so that the property path will include possible struct/array node.
Bug Fix: Fixed all light colors being set if any color property is animated.
Bug Fix: Fixed spawnables not playing back when Editor Preview Actor was set to false for Sequencer spawnables so that Begin Play doesn't get skipped.
Bug Fix: Fixed an issue to reset drop node on drag leave.
Bug Fix: Fixed a delay at shot boundaries causing Sequences to not play back and render out. The shot ID needs to be tracked to determine whether a new shot is encountered.
Bug Fix: Fixed a delay before warm up causing Sequences not to render.
Bug Fix: Fixed an issue to enable the camera cut track when popping back to the master only if there's a camera cut track in the Master Sequence. It addresses an issue where if you don't have a camera cut track in the Master, the camera gets locked to a camera cut in a sub-scene and you can't toggle out of it.
Bug Fix: Pivot locations are now updated for a selection when closing Sequencer.
Bug Fix: Fixed an overlap issue by looking at the key behind it. It addresses if there's 3 keyframes on consecutive frames, so if you zoom out, you should see two bordered keys when the overlap threshold is passed.
Bug Fix: Fixed UMG animations not working with Blueprint Nativization.
Bug Fix: Fixed an issue in Sequencer to remove null tracks on object bindings. Tracks can become null if they're from a plugin and the plugin is disabled.
Bug Fix: Moved some commands out of the generic Sequencer command bindings so that they don't take over the viewport. For example, end for "Snap to Floor" should still function in the viewport.
Bug Fix: Fixed an issue to restore pre-animated state when changing active channels.
Bug Fix: Fixed an issue to suspend broadcast of selection delegates when performing multiple operations.
Bug Fix: Display names for Shot are now switched to FString so that it's not localized.
Bug Fix: Fixed an issue with Sequence Recorder to close the target animation asset editor if it exists before recording into it.
Bug Fix: Fixed an issue with Sequence Recorder to record to the specified target animation for the target Actor only. Newly tracked components will have newly created animations so that they don't record to the same target animation assets.
Bug Fix: Sequencer now uses drag and drop actor factory if specified. This ensures that the correct Actor Factory is used in creating the object template for the Sequencer spawner. It fixes some spawnables not getting created properly (for example, an Empty Actor).
Bug Fix: We prevent throttling on the Curve Editor so that the Editor world tick can apply. It fixes the viewport not updating sometimes when adjusting curves in the Curve Editor.
Bug Fix: Fixed an issue where parameters were not used.
Bug Fix: Fixed WMF Media for IYUV encoded AVIs not playing correctly.
Bug Fix: Fixed WMF Media that caused H.265 frames to be dropped due to false negative buffer size checks.
Bug Fix: Fixed an issue in Sequencer to always save the default spawnable states regardless of the focused Sequence. Addresses issue if you step back to the Master Sequence (and the spawnable if it still exists) and then scrubs outside the region where the spawnable exists. It gets destroyed but saved default spawnable state doesn't get called because it's no longer the focused Sequence.
Bug Fix: Fixed Sequencer viewport invalidation so that it happens on Sequence evaluation.
Bug Fix: Fixed an issue when scrolling in track area.
Bug Fix: Fixed an issue with Sequence Recorder causing duplicate Actor triggers before playing so that the Sequence can be recorded and played back at the same time.
Bug Fix: Fixed an issue with Sequence Recorder when recording spawnables not getting the correct position for being spawned at.
Bug Fix: Fixed an issue with Sequence Recorder to record spawn Actors immediately so that they won't be missed if they're deleted before Tick.
Bug Fix: Fixed a problem where Sequencer was not updating after toggling Bind Sequencer to PIE/Simulate while PIE is active.
Bug Fix: Addressed an issue in Sequencer to find available non-overlapping row index when adding sub-sections.
Bug Fix: Fixed an issue with Sequence Recorder to address Level Sequences no triggering when recording. Level Sequences would not get recorded if the World Settings Actor was not recorded.
Bug Fix: Fixed an issue in Sequence Recorder causing Editor-only components to be recorded.
Bug Fix: Fixed missing display text for audio and video tracks when switching media.
Bug Fix: Fixed a slot animation issue where it was not being restored for montages that are recreated during evaluation.
Bug Fix: Fixed the import camera in Sequencer so that when new Cameras are created, values from the FBX are going to the newly created Cameras. Also, added Reduce Keys and Reduce Keys Tolerance to import FBX options.
Bug Fix: Fixed an issue in Sequencer when exporting unmatched float properties to custom attributes.
Bug Fix: Fixed an issue in Sequencer to override the animation asset in the player state if it doesn't match the animation asset being evaluated.
Bug Fix: Fixed an issue in Sequencer so that we don't update current time to be within the view range when stepping into a Sequence.
Bug Fix: Fixed an issue to skip binding IDs if the Sequence can't be found.
New: Added some options to the Curve Editor for Modify Owner Change to the Curve Owner Interface. We call mark as changed when modifying keys or tangents.
New: We now add a transaction for easing length.
New: Added Copy, cut, paste, and duplicate options to object bindings.
New: Added the ability to drag sections up in Sequencer.
New: Sequencer can now compile on the fly.
It is now able to compile partially or completely out-of-date evaluation templates from the source data as needed. It affords much more efficient compilation when working within Sequencer.
The concept of "instance data" for sub-Sequences, available through Movie Scene Player interface or persistent data stores. It replaces the compilation of specific templates for control rig templates.
Moved sub-tracks and sections to the Movie Scene module.
Removed the concept of shared tracks. Any previous uses should port over to shared execution tokens instead.
New: Now automatically create a camera cut track if a camera is dropped and there's no existing camera cut or there's no existing camera cut sections.
New: The sub-Sequence name is now displayed on binding ID pickers rather than sub-section name.
New: Added an active column for Sequence Recorder.
New: Added an option to export an object's transform to a Camera Anim asset.
New: We now allow blending for vector tracks.
New: Added an option to bake transforms to keys.
New: Added configuration for default completion mode for Movie Scene Sequences. The default Level Sequences is Restore State. All others, such as UMG are set to Keep State.
New: We now update auto scroll when moving Keys or Sections.
New: Added the ability to specify a global transform origin for Level Sequences"
It leverages the new concept of specifying an instance data object to the evaluation that allows systems to inject information into a Sequence at runtime, enabling more dynamic control of the track.
Level Sequence Actors use this by default to supply a dynamic "transform origin" to all component transform tracks, which all absolute transform sections will be added.
New: The constraint guid has been switched to constraint binding ID.
New: Playback speed has been added to the settings menu.
New: Added recording button to Sequence Recorder that adds for any selected Actors.
New: Added option to only show selected nodes only.
New: Added option to disable camera cuts on the Movie Scene Player.
New: Added icons in Sequencer for Delete and Rename icons.
New: Added default expansion states to allow track editors to specify them per track type. Material track is currently the only track that defaults to expanded.
Deprecated: We now show binding ID picker customization on all Details panels. This allows creation of new camera cut sections from existing bindings. Add New Camera Cut has been deprecated.
Removed: In Sequencer, the hotkeys Shift-C and Shift-D for toggling the Cinematic viewport have been removed. This was causing some confusion when users were accidently hitting them.
Static Mesh Editor
Crash Fix: Fixed an Editor crash when a distributed Simplygon job fails.
New: Moved the LOD sections dropdown to the LOD picker category. Custom LOD checkboxes are now hidden until "Custom" is checked. The same change applies to the Skeletal Mesh Editor.
New: Added LOD menu to viewport toolbar.
World Browser
Bug Fix: Fixed issues with re-importing of tiled Landscape data in the Word Composition window.
Gameplay Framework
Crash Fix: Fixed a crash in the Get Asset Registry Tags function in Static Mesh that happened when the Find Object function was called during saving.
Crash Fix: Fixed a rare crash when child actor components were loaded as part of an async sublevel load.
Crash Fix: Servers traveling to a world that shares a name with an Asset will log a warning instead of crashing. The full path to the map file can be used to travel successfully in these cases.
Crash Fix: Fixed an initialization crash when using GameplayTags from global singleton objects.
Crash Fix: Removed an extra dereference that was causing a crash in the Gameplay Cue Manager.
Bug Fix: Added tracking of Idle Seconds to the Performance Data Consumer interface's Frame Data.
Bug Fix: Fixed the Load Asset Blueprint node so it can be correctly called from a loop with different inputs.
Bug Fix: Player Camera Manager's Update View Target Internal function will no longer apply values from Blueprint Update Camera if it returns false.
Bug Fix: Fixed an issue where module redirects could cause the Asset Manager to incorrectly allow Primary Assets with the wrong base class.
Bug Fix: Fixed several bugs with async loading streaming levels via Streamable Manager, levels loaded this way now correctly handle offsets and navigation.
Bug Fix: Disabled the duplicate Primary Asset ID ensure for editor-only types like Maps, as it could activate unnecessarily.
Bug Fix: Exporting a Package without a type now adds warnings to the output logs.
Bug Fix: Components and Tick functions will no longer be unregistered and re-registered when testing a rename for an Actor.
Bug Fix: Modified Gameplay Tag's Import Text Item function to account for redirects when establishing Tag Name.
Bug Fix: Fixed character proxies doing up to two floor checks when only rotation is changed (there shouldn't be any). Added some optional verbose logging to the Find Floor and Compute Floor Dist functions.
Bug Fix: String allocations are no longer performed on every Server Move function call that the server runs.
Bug Fix: A bug with Post Significance not firing when unregistering an object unless it was the last object of its tag has been fixed.
Bug Fix: The engine will now only create a Material Instance Dynamic as a child of the calling object if the construction script is running. Otherwise, it will be created in the transient package.
Bug Fix: Fixed clients bypassing network move combining and RPC throttling when there is no acceleration and velocity is non-zero (for instance, when falling or decelerating). Changed velocity condition on whether moves may be combined to a delta from or to zero velocity.
Bug Fix: Fixed "infinite jump" exploits from rogue clients. The server no longer incorrectly resets the Jump Key Hold Time variable while not changing the Was Jumping variable.
Bug Fix: Log Gameplay Tags is now exposed.
Bug Fix: Disabled HLOD volume overlap events, and set to not load on client or server.
Bug Fix: Optimized client movement to avoid updating skeletal mesh bones on the client when teleporting back to combine a network move to the server. This can be avoided since the following Perform Movement call will move the mesh and cause the update there.
Bug Fix: Added stat for Force Position Update (when server ticks client that has timed out) so that it doesn't show up as Perform Movement.
Bug Fix: Redirectors or otherwise invalid .umap files are no longer considered sub levels for world composition.
Bug Fix: All Actor subclasses now correctly enforce not ticking when tick type is Viewports Only, even if they override TickActor. The Should Tick If Viewports Only function can override this by returning true.
Bug Fix: Added several transform related functions to the HIDE_ACTOR_TRANSFORM_FUNCTIONS macro and fixed the signatures on some that were already there.
New: Added a new plugin for performing data validation. Data validation can be run on individual assets, multiple assets or whole directories. It can also be run as a commandlet and included in your build process.
New: Added a function called Return To Main Menu to the Game Instance class. Games may call function to return to their main menu reliably, even in cases where a Player Controller isn't available to call Client Return To Main Menu With Text Reason.
New: Data Tables now support getting the value of each row for a specific column/property, excluding the title of the column.
New: Improved the editor UI for Primary Asset ID properties. It now uses the same one as Pins and better handles objects that aren't already loaded.
New: Added functions to Spring Arm Component to get relevant status of the collision test.
New: Added a To String to the Lex class for physics enums Collision Trace Flag, Physics Type, and Body Collision Response.
New: Added "p.NetForceClientServerMoveLossPercent" cvar to simulate loss of client-to-server movement RPCs (without hosing the rest of networking).
New: The Player Input function Massage Axis Input can now be overridden in subclasses.
New: A new variable, Should Manager Determine Type And Name, has been added to Asset Manager settings. It is disabled by default. If enabled, the Asset Manager will be able to load assets that do not have a native class that implements Get Primary Asset ID, in both the editor and a cooked game. This will behave similarly to Should Editor Guess Type And Name, but in exchange for some runtime performance it will work properly in a cooked build.
New: Primary Data Asset can now be directly blueprinted to enable blueprint-only Primary Assets, even if Should Manager Determine Type And Name is disabled. If you do this, the Primary Asset Type will be the Blueprint below Primary Data Asset, which you can then inherit from to create your Primary Assets.
New: If overrides of Actor Component's Begin Play function are not routed up the hierarchy code, the engine will now use Ensure to warn that initialization steps were missed.
New: Alpha Blend struct is now marked as BlueprintType.
New: Gameplay Abilities now pass along their Source Object when creating the effect context for a new effect.
Consolidated all of the events on Active Gameplay Effect into a single struct.
New: Added an active effect event for when the inhibition of an effect changes, meaning that the effect is no longer relevant, but had not actually been removed.
New: Added functionality to cycle between targets with Page Up and Page Down for 'showdebug' commands. List of targets is contextual (For example 'showdebug animation' will consider all visible actors with an AnimGraph). Current debug Target is highlighted in a green bounding box.
New: Performance optimization: Added Network Skip Proxy Prediction On Net Update to the Character Movement Component. This skips immediate forward prediction for proxies on the frame they receive a network update, avoiding all forward prediction sweeps and floor checks on those updates. Intermediate frames will interpolate with prediction. This can also be disabled globally by setting the CVar "p.NetEnableSkipProxyPredictionOnNetUpdate" to 0.
New: Reduced overhead of evaluating streaming level volumes when all Player Controllers have Is Using Streaming Volumes set to false.
New: Significance Manager's Update function now takes viewpoints in as an Array View instead of a const Array reference.
New: When keys with unknown character code are received by the input system, a new Key struct will now be dynamically created to represent it rather than it being rejected as an invalid key.
New: Significance Managers' Post Significance Update functions can now specify to be executed sequentially on the game thread instead of only as concurrently in the parallel for.
New: Conditional Init Axis Properties is now protected instead of private, so it can be seen by child classes.
New: The Smooth Mouse function is virtual, so games can implement their own smoothing code.
New: Added Rand Point In Circle function. It returns a random point, uniformly distributed, within the specified radius.
New: Updated ensures to give more information when there is an invalid Ground Movement Mode.
New: Added a new trait called Post Script Construct. This enables users to perform initialization after Blueprints have created a temporary object.
New: Hooked up gameplay tag containers to use Post Script Construct to populate parent tags.
Localization
Crash Fix: Fixed a crash if a break iterator outlives ICU during shutdown.
Crash Fix: Fixed a crash if a regex pattern or matcher outlive ICU during shutdown.
Bug Fix: Translation Editor now shows stale translations as "Untranslated".
Bug Fix: Fixed Roboto Light for certain Russian characters.
Bug Fix: Gather Text From Source no longer causes the commandlet to fail.
Bug Fix: Fixed some case-sensitivity issues with Text format argument names/pins. This change restores their original behavior.
Bug Fix: Fixed game preview language not updating from the native text when switching between preview languages.
Bug Fix: Fixed game preview language not updating in realtime while PIE was enabled.
Bug Fix: Fixed struct instances sometimes gathering text that was identical to a parent instance.
Bug Fix: Fixed culture being incorrect when added via the Localization Dashboard.
New: Support for culture correct parsing of decimal numbers has been added.
New: Implemented Always Sign number formatting option.
New: Added an improved Korean font to the editor.
New: Added an improved Japanese font to the editor.
New: Improved the ability to lockdown available game languages. In addition to the previous "DisabledCultures" array, you can now add an array of "EnabledCultures".
New: Added a warning for duplicate package localization IDs when gathering asset localization.
New: Added a way to display and reset the package localization ID via the Content Browser.
Networking
Crash Fix: Fixed a rare crash that could occur on servers where replicated, placed-in-map actors could be destroyed, and clients could send references to these actors in server function parameters.
Crash Fix: Fixed a crash caused by not clearing the send buffer after closing the connection when trying to send data before the handshake is completed.
Crash Fix: Fixed a client-side assert when trying to send RPCs while the connection is pending closure due to queued bunches.
Crash Fix: Fixed several errors and asserts that could occur when using network encryption.
Bug Fix: Removed an ensure that could fire inappropriately in some cases during playback of a replay.
Bug Fix: Made a change to TCP Listener to prevent it from polling while waiting for connections.
Bug Fix: Fixed an issue where the single process Play-In-Editor dedicated server could tick incorrect objects.
Bug Fix: Fixed a replication issue with network startup actors that are renamed on the server.
Crash Fix: Fixed a crash that could occur when replicating objects during a Play-In-Editor game.
Bug Fix: Steam voice now continues to work after seamless travel.
Bug Fix: Changed the adaptive network update frequency option to be disabled by default. This was causing annoying and difficult-to-debug replication artifacts if network update rates weren't managed carefully.
Bug Fix: Fixed a socket send bug that could cause large files to fail to transfer from the cook on the fly server.
Bug Fix: UDP Messaging serialization notification is now thread-safe.
Bug Fix: Fixed a replication checksum issue with Slate brushes.
Bug Fix: Added stats to Net Driver's Tick Flush function and analytic stat-gathering blocks within that function.
Bug Fix: When a client informs the server that it has loaded a streaming level, we now force a net update on all dormant actors that have, at any point, replicated data to relevant clients and ensure that the connection's destroyed startup/dormant actors list is properly populated.
Bug Fix: Fixed an issue that could cause character movement to look jittery in replay playback in some cases.
Bug Fix: Fixed a bug that could cause replicated actors that are destroyed while dormant to never be destroyed on clients.
Bug Fix: Fixed a server crash that could occur if a client sent a reference to an object who's outer object had already been garbage collected.
Bug Fix: GetGameLoginOptions to URL for split screen players was missing, it has now been added for consistency with first player login flow
Bug Fix: Fixed a rare server crash when actors in the process of being destroyed could erroneously be re-added to the network object list.
Bug Fix: Actors are now removed from all net drivers' object lists instead of just the one matching the net driver name stored on the actor.
Bug Fix: Fixed websocket not providing information when the peer closes the connection.
Bug Fix: Fixed a rare server crash that could occur if network encryption was in use.
Bug Fix: Fixed a rare crash due to a null game mode that could occur when joining a server.
New: Added a local file replay streamer that supports playback, recording, compression, and replay events.
New: Added a way to view a dedicated server's network stats on a client in realtime (useful for seeing server stats for cloud hosted servers).
Changed some server stats to be the total for the server instead of the stats for Connection Zero.
New: Added a console command ("net dumpdormancy") to view the dormancy status of Actors. Improved the names displayed when editing Net Dormancy properties.
New: Switched property type comparisons from Strings to Names for speed.
New: Optimized uint replication by removing unnecessary memory management.
New: Optimized updating level visibility state between client and servers. When possible, we now only send a single RPC to communicate which levels are visible or should be visible rather than separate RPCs for each level.
New: Added a new console variable net.Max Connections To Tick Per Server Frame, which can be set to limit the number of client connections a server processes every frame. When many clients are connected, this can help maintain the desired framerate on the server.
New: Added the ability to disable automatic ping replication to all clients in Player State. This can add up to a lot of CPU and bandwidth overhead with many clients connected, since player states are always network relevant. See the Should Update Replicated Ping variable in the Player State class.
New: Added new global network delegates callback Network Cheat Detected for responding when cheating is detected on a server.
New: Analytics have been added for oodle compression percentages.
New: We now use "-multihomehttp" instead of "-multihome" for routing HTTP sockets.
New: Support has been added to Oodle for iOS, Android, and Switch.
New: Network optimizations to reduce memory management churn have been implemented, saving about
10%of net tick time.
New: Improved handling for "pending connection lost" notification, which is triggered only if a connection is lost and there is no "owning actor" to deal with the connection loss.
Added Received Join and Cleaned Up states to connection to make sure that the pending connection lost delegate only fires at the appropriate time.
New: Games now receive a notification when the Player State integer unique ID is available.
New: The Player class can handle beacon connections calling Exec functions.
New: Added the "demo.Loop" console command which will restart playback from the beginning of a replay once the end has been reached.
New: Added some additional cycle counter stats in networking code to provide more insight when viewing stat profiles.
New: CurlHttp now logs send/response basics at normal Log level, providing a better intermediate log level between Warning and Verbose.
New: Added the command-line option "-ExitAfterReplay" to close the client when a replay reaches the end of playback.
New: Added verbose http logging to CurlHttp when a request is canceled.
New: The "net.UseEncryptionToken" console variable has been renamed to "net.AllowEncryption" and made more robust.
Online
Crash Fix: Fixed a crash in web browser module destruction related to the freeing of memory via default destructors after Cef Shutdown has been called.
Bug Fix: Dedicated servers will now respect the voice online subsystem override option. Bug Fix: Fixed a thread safety issue in the user cloud interface for Steam.
Bug Fix: Fixed LoginFlow module not being included in installed builds of the engine.
Bug Fix: Is Server Delegate For OSS can now use the play world during PIE even if no context was passed in.
Bug Fix: Failure to load TCP socket library (possibly due to lacking the protocol file) on Windows is now handled by defaulting to IPPROTO_TCP.
Bug Fix: The Net Connection variable, Player ID is now set up properly on clients and servers for both beacons and game net drivers.
Bug Fix: Improved the way HTTP requests handle timeouts.
New: Included a Rich Presence interface to the Steam Online Subsystem.
New: Created a test harness for testing an online subsystem's rich presence interface.
New: Added embedded browser cookie deletion for Google sign-in when Logout is called.
New: Added the ability to retry HTTP operations by VERB. This enables cloud-save PUT operations to retry automatically.
New: Removed the "Pragma: no-cache" header from libcurl requests.
New: You can now enable libcurl debug callbacks in shipping builds to get more information on libcurl errors.
New: Added the ability to update cacert.pem from Curl's website. ()
Physics
Crash Fix: Fixed a crash that would delete max distance mask from a clothing data entry.
Crash Fix: Fixed a crash when adding empty materials to destructible meshes in the destructible mesh editor.
Crash Fix: Fixed a crash on destroying Physical Animation Component if the scene had gone away.
Crash Fix: No longer crashes when switching between clothing assets while painting with a gradient tool that has cached start/end indices.
Crash Fix: Opening a physics asset that uses a preview mesh with no skeleton no longer causes a crash.
Crash Fix: Fixed a crash that could happen due to outdated rendering data while editing destructible meshes.
Crash Fix: Pasting properties in the Physics Asset Editor no longer crashes.
Crash Fix: Fixed a crash with destructible components cleaning up physics bodies.
Bug Fix: Removed the deprecated Response To Channels property of Body Instance from non-Editor builds to save memory.
Bug Fix: Moved constraint and physical anim profile indicators to the top of the details panel.
Bug Fix: We now update cloth appropriately in the physics asset editor, so physics can interact in real time with cloth.
Bug Fix: Fixed hang when selecting all bodies or constraints in a physics asset.
Bug Fix: Copy/paste of physics properties when the skeleton tree has focus now works as expected.
Bug Fix: Fixed clothing sphere collision detection issue when clothing bounds are flat or almost flat.
Bug Fix: Welded bodies with negative scale now correctly use the mirrored convex hull.
Bug Fix: Fixed generation of adjacency buffers for older clothing data.
Bug Fix: Fixed 'NODEBUG' option for PVD CONNECT console command (PhysX Visual Debugger). It now (correctly) only sends profile data.
Bug Fix: Fixed calling Update Kinematic Bones in Skeletal Mesh Component if there are no physics bodies
New: Added "Anim Drive" to clothing assets and runtime interaction for simulations.
New: Added gravity override support for clothing simulations.
New: Exposed some PhysicsAsset functionality to be more extensible.
New: Improved information in Hit Result in case of initial overlap. The Impact Point member variable will now contain information from PhysX on the deepest overlap point.
New: Added the ability to profile scene query hitches. (See p.SQHitchDetection in code.)
New: Added 'Set Use CCD' function for changing Continuous Collision Detection option at run-time.
Clothing
Crash Fix: Fixed a crash in clothing actor creation when the clothing simulation mesh has no simulated vertices.
Crash Fix: Fixed a crash when setting a Poseable Mesh Component as the Master Pose Component for a Skeletal Mesh Component that was using clothing.
Bug Fix: Fixed non-working tethers in clothing.
Bug Fix: Clothing configuration now applies to active simulation after editing.
Bug Fix: Fixed and re-enabled accurate wind mode.
Bug Fix: Added a clamp to solver frequency to make it impossible to hang the editor with very high clothing solver frequencies.
New: Changed "bCompileNvCloth" compile option to use the same pattern as "bCompileAPEX" (on by default, disabled on some platforms). This allows game projects to disable it.
Collision Detection
New: Improved "Auto Convex" tool for generating simplified convex collision for meshes.
Now uses "Max Hulls" option instead of "Accuracy".
Computation is now done asynchronously.
Now using V-HACD version 2.3.
Physics Asset Editor
Crash Fix: Fixed a crash undoing primitive regeneration when stopping an in-progress simulation.
Crash Fix: Deleting primitives in Physics Asset Editor no longer causes a crash during garbage collection.
Bug Fix: Constraint setup shortcuts and body-body collision are now undoable.
Bug Fix: Right-click now displays the context menu for constraints correctly.
Bug Fix: Fixed dangling lines, poor search focus, and graph not refreshing when making new constraints.
Bug Fix: Added warning to "Use Async Scene" property when shown in the physics asset editor if the project doesn't currently use an async scene.
Bug Fix: Tweaked physics asset editor viewport tooltip.
Bug Fix: Renamed "Create Joints" to "Create Constraints" in Physics Asset Editor.
Bug Fix: Fix that now allows edits to physics sim options to be undone/redone.
Bug Fix: Constraints are now displayed only once in the physics asset editor tree view.
Bug Fix: Fixed not being able to assign/unassign constraint and physical animation profiles from the context menu.
Bug Fix: Prevented undo/redo breaking when moving both a constraint and a body at the same time.
New: Added "Show Only Selected Constraints" to the physics asset editor.
New: Included a keyboard shortcut (Ctrl+T) to toggle between body and constraint selection.
New: Added the ability to rename shapes in the Physics Asset Editor.
New: Added back constraint shortcut to PhAT toolbar.
New: Added error-reporting to the physics profile name widgets.
Platforms
Crash Fix: Updated Intel SPC Tex Comp DLLs to address crashes with some processors on Windows during texture compression.
Bug Fix: Fixed an issue with vsnprintf on platforms that use the custom implementation found in Standard Platform String to ignore length modifier for certain types (floating point and pointer).
Bug Fix: Delete existing Source Config File before allocating a new one to prevent them leaking.
Bug Fix: Fixed compile issues in builds without Low Level Memory enabled.
Bug Fix: Fixed Update Texture 3D to support source pitch correctly.
New: Added missing executable icons for Lightmass and other programs.
New: Added a descriptive error message when using an Invalid iOS Mobile Provision.
All Mobile
Bug Fix: Fixed an issue for functional code checks being removed for shipping builds in media framework.
Bug Fix: Fixed an issue that prevented properly flushable logs from assertions and errors on renderthread for Android and iOS.
New: Disabled Niagara vertex factories for mobile targets.
New: Disabled manual vertex fetch on mobile.
Android
Crash Fix: Fixed crash if quickly clicking on "Configure Now" button in Android Project Settings multiple times.
Crash Fix: Fixed a crash in OpenGL Dynamic RHI with Create OpenGL Texture when launching on Mali Galaxy S III.
Crash Fix: Fixed a crash on application exit of SPRINGBOARD. The process-exit watchdog transgression found several issues if the game is forced closed when the startup movie is playing and the "Wait for movies to complete" is enabled/
Crash Fix: Addressed some crashes related to Experimental Android Virtual Keyboard.
Backspace on an empty line would crash.
Virtual keyboard text field isn't visible after rotating from landscape to portrait.
Chinese and Korean virtual keyboards don't allow native characters.
Number pad reverts back to normal keyboard
Gboard and Swift swipe entry are not supported.
Cursor and UMG do not match.
Text field doesn't appear if there was too much text.
Crash Fix: Samsung Galaxy S8 with Android 7.0 is not hiding suggestions and disabling predictive input. There are cases with this that can cause a crash. Global solution: On text change, downgrade to simple suggestions all the easy correction spans that are not a spell check span (remove android.text.style.SuggestionSpan.FLAG_EASY_CORRECT flags).
Bug Fix: Fixed an issue to properly detect need for rebuilds from the final NDK and SDK levels only in UEBuildSettings.txt.
Bug Fix: Fixed Android icon paths.
Bug Fix: Fixed polling code in Tcp Message Transport that was wasting CPU cycles when using Launch on an Android device.
Bug Fix: Fixed an issue related to Text Watcher On Text Change being called multiple times when typing or deleting the content of a native Android Edit Text control.
Bug Fix: Fixed an issue where Android startup movies not always playing.
Bug Fix: Fixed an issue with values for pinch input producing very different results for the same area on Android devices:
When the pinch goes beyond the viewport boundaries, the touch that goes off-screen is "released" and the zoom effect is over. This is solved by remembering the last pinch event values
The initial distance for the pinch/rotate are "hacked" by touching the screen and moving the finger to another position before using the second finger. This is solved by using the correct values when the pinch event starts.
Bug Fix: Fixed an issue with Android Web Browser 3D. The external texture was moved from Content Browser to the Web Browser plugin.
Bug Fix: Fixed access to Frame Update Info in MediaPlayer14.java and CameraPlayer14.java with Proguard.
Bug Fix: Fixed an issue with flag vertex and fragment shaders belonging to materials with external textures.
Bug Fix: Protected against asking for duration before prepare completed in movie player.
Bug Fix: Fixed support for GL_OES_standard_derivatives on Android.
Bug Fix: Fixed an issue so that the relative path to ResonanceAudio_API.xml is accurate.
Bug Fix: Fixed an issue with external texture shaders for some devices.
Bug Fix: Added missing SecureHash.h include needed for non-unity builds.
Bug Fix: Fixed some issues on Lenovo 939 when rendering a simple scene. The shader compiler complained about code lines before the #extension directectives. Added a placeholder ("// end extensions") in the shader code.
Bug Fix: Fixed DXT apps that failed on first launch when 'For Distribution' enabled with the error message: "Unsupported Texture Format".
Bug Fix: Corrected the code which rounded up the coordinates to the nearest even number.
New: Updated Android SDK license agreement for Gradle.
New: We now use UPL hash to invalidate build if any UPL files are changed.
New: Added a wait for the audio thread to pause audio when going to background.
New: Added a new packaging mode for Android:
ETC1a, which will compress files with alpha channel as ETC2 and files without alpha channel.
At runtime, detection will determine when ETC2 is not supported and will decompress those files.
This format will work on all Android devices while using one single compressed texture format, but devices without ETC2 support will have an impact on performance.
New: Added support for texture streaming to most Android devices with ES 3.1 support.
New: Added verification of support for cooked texture format(s) on device at runtime. (optional with Validate texture formats checkbox in Android project settings and skipped for cook on the fly)
New: Added new Android Project Settings to override the NDK and SDK may now be optionally used per-project. Leave them blank to use the global setting in Android SDK Project Settings.
New: Added LOD Proxy support.
HTML5
Bug Fix: Chrome renders transparent on OSX even though canvas has been set with alpha to false.
Note that if backbuffer is needed with alpha values, this will need to be rewritten.
Bug Fix: Fixed an issue for "Runtime Error: integer result unrepresentable" compiler error removed the following linker options:
-s OUTLINTING_LIMIT flags -s BINARYEN_TRAP_MODE='clamp'
Bug Fix: Disabled Support Screen Percentage to render the full screen otherwise. This was causing HTML5 screens to render only a portion of the screen and in black.
Bug Fix: Fixed an issue with 2D text rendering.
Bug Fix: Fixed an issue so that logs are redirected to the console window instead of "low window".
New: Added HTML5 Launch Helper to check to see if the port is already in use and gracefully exit with a message if already in use. .
New: Set controller axis and button max value in code instead of relying on emscripten Get Gamepad Status.
New: Updated HTML5 readme file with a lot of new and clearer information on files generated, what they are, and how to use the compressed versus uncompressed versions.
New: Resurrected vehicle games and removed "IsContentOnly" check during HTML file generation to allow code projects to work again. This corrected optimization suffix for PhysX 3 Vehicles.
iOS
Bug Fix: Fixed an issue while remote compiling Metal shaders to now check the connection before starting the compile process.
Bug Fix: While Remote Shader Compiling for Metal, the editor no longer spawns and closes a lot of minimized windows.
Bug Fix: Fixed an issue to modify the toolchain to insert the correct SDKRoot.
XCode address sanitizer feature did not work on iOS. The dylib loader depended on the default SDKROOT parameter.
Bug Fix: Fixed local WiFi multiplayer discovery on iOS.
Bug Fix: Fixed large Metal texture memory leak when opening card packs.
Bug Fix: Fixed a issue causing iOS remote building to fail when the remote mac user name contained a space character.
Bug Fix: When building on Mac if no valid provisioning profile and sign in certificate is set up, but the XCode has a correct development wildcard, it will use that wildcard to sign the build.
Bug Fix: tvOS now correctly package or launch-on.
New: Enabled Avf Media plugin for tvOS.
New: Disabled the validation layer in iOS App Store builds for shipping builds.
Linux
Crash Fix: Fixed a crash on a headless system.
Crash Fix: Fixed a crash when printing
%con Linux, Mac, HTML5 etc.
Crash Fix: Made RHI initialization not crash on Mesa drivers.
Crash Fix: Pop-up a message box when unable to launch ShaderCompileWorker instead of crashing.
Crash Fix: Disabled some unstable gdb visualizers that can crash the debugger.
Crash Fix: Fixed an edge-case initialization crash (will print a user-friendly message and exit).
Crash Fix: Fixed the Editor crashing on start if compiled with a recent (5.0+) clang.
Crash Fix: Fix an audio clean up crash when exiting PIE.
Bug Fix: Fixed the Setup.sh handling of special characters in PNG path.
Bug Fix: Fixed UBT to invoke newer (supported) clang versions.
Bug Fix: Fixed libelf breaking build if some third party libs are used.
Bug Fix: Remove unneeded dependency on CEF for base engine.
Bug Fix: Fixed mismatching "noperspective" qualifier.
Bug Fix: Fixed Linux thread/process priorities (requires system settings to allow raising them).
Bug Fix: Fix priority calculation when RLIMIT_NICE is not 0.
Bug Fix: Fixed the Output Device Std Output function to use printf() on Unix platforms. Mixing wprintf/printf is undefined behavior.
Bug Fix: Fixed the unnecessary test for popcnt presence.
Bug Fix: Fixed compilation with glibc 2.26+ (e.g. Ubuntu 17.10).
Bug Fix: Fixed Slate Dialogs to show image icon for *.tga.
Bug Fix: Fixed submenus of context menu not working.
Bug Fix: Fixed RHI settings being wrong when removed via project properties UI.
Bug Fix: Fixed the inability to click on some UI elements at the edges of the screen due to tooltip windows obscuring it.
Bug Fix: Made UBT use response files for the Linux compiler (avoids MAX_PATH issues when cross-compiling).
Bug Fix: Fixed symbol interposition problems with both AMD and NVidia drivers.
Bug Fix: Fixed ARB callback errors when hitting Build Lighting.
Bug Fix: Fix shadow compiler warnings around our includes.
Bug Fix: Do not assert on an unimplemented function, just log the function.
Bug Fix: Set sound to unfocused volume multiplier if not focused.
Bug Fix: Open project in the file explorer when finished creating it if the Null Source Code Access is the default source code editor
Bug Fix: Fixed inability to create C++ projects on Linux despite being able to compile the editor.
New: Updated Linux VHACD.
New: Added LTO support.
New: Enabled DPI on/off switch.
New: Changed BinnedAllocFromOS to place descriptor after allocated chunk instead of in front of it to save some memory.
New: Re-enabled index buffer optimization. It was disabled due to problems found in the live release.
New: Added build-id to all binaries to improve lldb startup times.
New: Vulkan RHI will default to SM5
New: Updated OpenAL to 1.18.1.
New: Use elementary OS versions to set the Ubuntu versions for dependencies.
New: Use Xft.dpi when calculating DPI on Linux.
New: Rebuild libcurl 7.57.
New: Added detection and messaging when building lighting is impossible due to local network interface problems.
New: Add better error handling when using new Toolchain for Linux on Windows.
New: Add Alembic (*.abc) support
New: Added fixes to make porting to gcc easier.
New: Included VSCode in the list of default project generators.
New: Implemented voice capture using SDL.
New: Recompiled the bundled SDL library with Wayland support.
New: UBT will use compiler response files when compiling for Linux.
New: Enabled pooling of memory allocations (only 64KB allocations) on servers.
New: Added LTO support to the Linux cross-builds (requires v11 and newer toolchain).
New: Enabled packaging for Android Vulkan.
New: Added a message about running out of inotify handles a warning instead of error so it doesn't break cooking.
New: Added better error messaging when running out of inotify watches.:
Removed: Removed the support for /-prefixed command line switches.
Mac
Crash Fix: Fixed a rare crash caused by a deferred call to Close Window function executing during Slate's PrepassWindowAndChildren call.
Crash Fix: Fix a crash when using VOIP on Mac.
Crash Fix: Fixed a crash in Avf Media Player related to reusing an Avf Media Player object.
Crash Fix: Fixed a crash on destroying certain menus in an animation persona window.
Bug Fix: Fixed a problem with Xcode project generator not handling quoted preprocessor definitions correctly.
Bug Fix: Fixed a problem with IME not working in context menu text edit fields.
Bug Fix: Disable Zoom button if project requests a resizable window without it.
Bug Fix: Fixed header search paths in XCode project for paths containing space character.
Bug Fix: Fixed an issue with creating an NSString for the Low Level Output Debug String. The string can just be printed out as a UTF-8 string.
Bug Fix: Fixed a C++ std in generated Xcode project to match rest of engine (C++14).
Bug Fix: Fixed an issue with some context menus opened right after closing other context menu disappearing right after being shown.
Bug Fix: Don't activate Mac Crash Reporter Client on launch when it's running in unattended mode.
Bug Fix: Fixed a small number of persistent memory leaks on the Mac build that slowly consume more and more memory as you use the Editor. Interacting with menu's was particularly egregious as each NSMenu would leak after you move away.
Bug Fix: Fixed a problem with Mac editor not exiting with error code returned from Guarded Main function.
Bug Fix: Better support for GPUs with 4GB or more of video memory.
New: Updated Xcode project generator to sort projects in the navigator by name (within folders) and also sort the list of schemes so that their order matches the order of projects in the navigator.
New: Changed XCode project to automatically build ShaderCompileWorker in addition to the selected target for non-program targets.
New: Added initial support for building and running Niagara.
New: Added support for changing monitor's display mode on Mac in fullscreen mode. This greatly improves performance on Retina screens when playing in resolutions lower than native.
Windows
Bug Fix: Switched Windows to use wprintf() consistently.
Bug Fix: Fixed an issue with Windows file save dialog not appending the correct extension when there were multiple options available.
Bug Fix: Fixed an issue with custom window positions in "windowed mode".
Programming
New: Minimized number of fractional digits you can have in a resultant string while using FString::SanitizeFloat, defaulting to 1. Setting this to 0 prevents adding '.0' to integers.
New: Added ‘.idea/misc.xml' file generation to speed up CLion indexing.
New: Added option to generate minimal set of targets for CMake files.
New: Added shader and config files to CMake file generation for searching in IDEs.
New: Added icon size override to FObjectEntryBox.
New: Added CMAKE_CXX_COMPILER and CMAKE_C_COMPILER settings to generated CMake files.
New: Removed deprecated version of RHISetStreamSource.
New: Optimized hlslcc to use hashes instead of lists with hash bucket sizes being prime.
New: When generating CMake files on Mac, added support for iOS and TVOS targets.
New: Merged CLion source code access Plugin support.
New: Reduced calls to malloc by using larger memory blocks for hlslcc Type allocation pools.
New: Optimized hlslcc memory allocation policy on Mac, resulting in
34%faster and
83%cheaper memory moves.
New: Added basic support for hotfixing live assets from .ini files.
Rendering
Crash Fix: Fixed a Metal shader standards iOS crash when users would try to load a project that tries to use an incompatible shader version.
Crash Fix: Disable the METAL_SM5_NOTESS shader platform again to workaround the Nvidia pipeline state compiler crash by changing the buffer address space from "constant" to "device".
Crash Fix: Fixed a crash on D3D12 when locking buffers for read access.
Crash Fix: Resolved an intermittent crash in the MetalRHI by retaining the bound Render Targets until we bind something else.
Crash Fix: Developed a workaround for a crash on fullscreen startup in packaged builds of Showdown for Mac.
CrashFix: Fixed a crash in Metal on one kind of draw call when tessellation is enabled.
Crash Fix: Fixed crash when building Texture Streaming in the resave package commantlet.
Crash Fix: Attempt to reduce the number of crashes on Nvidia Macs by retaining resources used in each command-buffer.
Crash Fix: Fixed a crash in the MetalRHI when using dithered LODs caused by one of the fragment shaders not declaring any explicit outputs.
Crash Fix: Fixed a crash in projects that used Distance Field Shadowing when Static Meshes were updated while not yet registered with the level.
Crash Fix: Doubled the size of the occlusion query buffer in D3D12, to 256KB of memory, to prevent crashes in larger levels.
Crash Fix: Fix for Intel HD 4000 crashing on Mac OS 10.12.6.
CrashFix: Fixed a crash when calling the "FreezeRendering" console command on non Editor builds.
Crash Fix: Fixed crash when scaling a Blueprint with a Planar Reflection component.
Crash Fix: Fixed issue where FOpenGLDynamicRHI RHISetRenderTargetsAndClear function would crash when the render target did not have a color binding.
Bug Fix: Fixed Alembic Geom Cache no longer rendering correctly.
Bug Fix: Fix the Intel vector-array-dereference workaround so that it doesn't cause the AMD compiler to explode instead.
Bug Fix: Fixed the way iOS version specific Metal features are enabled to avoid an overflow bug that meant some weren't being enabled.
Bug Fix: Removed support for Metal Shader Model 4 on Mac.
Bug Fix: Fix compilation on HLSLCC with integral values are not automatically converted into comparisons with zero.
Bug Fix: Fixed the normalization of Morph Tangents.
Bug Fix: Tweaked Metal GPU identification so that it works with eGPU boxes and prototype hardware.
Bug Fix: Implement Byte Address Buffer RWByte Address Buffer in HLSLCC in a similar manner to Structured Buffer RWStructured Buffer so that the backends don't need too much modification.
Bug Fix: Enabled the ability to use GPU morph targets on Mac.
Bug Fix: Fixed RHI thread and parallel execution in Mac -game mode.
Bug Fix: DX11 Texture formats are now supported on Mac Metal.
Bug Fix: Fixed long standing bug in OpenGL Vertex Declaration cache with hash collisions in the cache sometimes returning the wrong declaration.
Bug Fix: Fixed Material events not showing up in RenderDoc.
Bug Fix: Fixed Anisotropic Filtering not working on Vulkan.
Bug Fix: Memory is no longer reallocated on every Vulkan draw call.
Bug Fix: Stopped memory leak from MTL Textures missing autorelease pool blocks in Metal Texture functions.
Bug Fix: Fixed Metal compilation for PCSS shadows.
Bug Fix: Fixed comparison of Scene Color Format.
Bug Fix: Fixed incorrect handling of some float to uint shader loads in Metal.
Bug Fix: Reduce the number of Metal PCH shader compiler errors by using atomic file operations.
Bug Fix: Changed the Metal sampler filter translation to better match Direct3d.
Bug Fix: Shared Texture samplers are now refreshed when changing Max Anisotropy level.
Bug Fix: Fixed a bug where Static Meshes were being added to the scene twice when in Simple Forward rendering was enabled.
Bug Fix: Fixed the native shader libraries again & undo.
Bug Fix: Removed the single linear array for accumulated shader code.
Bug Fix: Uncompress the shader data for the native library system.
Big Fix: Fixed an errant change to the Metal compiler that was trying to force the fully compiled library into the single Metal library.
Bug Fix: Fixed up the internal validation layer in MetalRHI which can now be enabled from the OS command-line.
Bug Fix: Fixed condition controlling which features are enabled when iOS >= 10.3.
Bug Fix: Errors from failed attempts to compile global shaders are now reported correctly. This information enables us to identify the causes of these failures on non-Windows platforms.
Bug Fix: Fixed Texture resolution returning 0 when running Blueprint Construction scripts when cooking.
Bug Fix: We can now handle swizzle's in the HLSLCC FMA identification pass so that we reduce the number of instructions and the platform compiler can't break the instructions up.
Bug Fix: Developed a work around two different bugs with Metal shader compilation behavior that caused black flickering when a material used World Position Offset.
Bug Fix: Metal Shader Format will no longer fallback to text shaders when you ask it to compile to bytecode but the bytecode compiler is not available (either locally or remotely).
Bug Fix: Metal can now render patches as well as triangles when tessellation is enabled.
Bug Fix: Add the ability to emit more details when a pixel shader is found to have no outputs at all.
Bug Fix: Fixed image plates not showing up in high resolution screenshots.
Bug Fix: VIS_Max meta is now set to hidden.
Bug Fix: Fixed Distance Field Ambient Occlusion not working on Metal platforms.
Bug Fix: Changed Metal pipeline bindings warning to Verbose.
Bug Fix: Prevented an unnecessary red<->blue swap in FVulkan Dynamic RHI RHI Read Surface Data function when reading back a surface in VK_FORMAT_R16G16B16A16_SFLOAT format.
Bug Fix: Fixed non-deterministic locations of procedurally placed Landscape grass on different platforms and compilers.
Bug Fix: Fixed huge client hitch when applying changes to rendering scalability settings while in game.
Bug Fix: Stopped MetalRHI leaking vertex-descriptors and debug-group strings.
Bug Fix: applied a fix to runtime error with textures being loaded from bulk data when texture streaming is disabled.
Bug Fix: Removed faulty ensure in the texture streaming code.
Bug Fix: Fix on D3D12 for CPUs with more than 16 cores.
Bug Fix: Added support for Vulkan devices that don't support cached memory type.
Bug Fix: Fixes for UScene Capture Component Hidden Actors always staying gray out in World Editor.
Bug Fix: Fixed Recompute Tangents and Skin Cache on OpenGL 4.
Bug Fix: Fixed OpenGL sometimes overwriting resources if too many were bound on OpenGL 4.
Bug Fix: Fixed Metal support for Manual Vertex Fetch by making it a property of the shader platform.
Bug Fix: Fixed tessellation shader capability initialization in OpenGL renderer.
Bug Fix: Fixed instancing in Vulkan.
Bug Fix: Fixed Skeletal Mesh LODs inside editor.
Bug Fix: Fixed loading shaders from the cooked DDC.
Bug Fix: Planar Reflections now properly handle world origin rebasing.
Bug Fix: Fixed initialization of instancing random vertex stream.
Bug Fix: Using the -opengl command line argument will now default OpenGL to 4.3.
New: Added a recompute Tangents options for cloth assets.
New: Updated Nvidia Ansel to v1.4
New: Added a new console-variable to enable and disable Manual-Vertex-Fetch for Metal. MVF works on macOS, though testing did expose an error with Tessellation on Nvidia (true for MVF enabled & disabled).
New: Refactored the way we compile Buffer and RW Buffer types for Metal so that we can support the type-conversion semantics of HLSL/D3D.
New: Manual Vfetch for Static Meshes. Skeletal Meshes also use the new layout.
New: Added code to D3D11RHI (enabled with "-d3ddebug") to validate Draw Indexed Primitives isn't trying to draw off the end of the instanced vertex streams.
New: Added Get Section From Procedural Mesh utility function for the Procedural Mesh Component.
New: Added support for 2D IES asymmetrical light profiles that can be up to 256x256 in size.
New: Added support for linear HDR EXR output from Sequencer.
New: More Metal debugging commandline options "-metalfastmath" & "-metalnofastmath" to force fast-math on or off for all shaders, must be using runtime-compiled shaders (i.e. -metalshaderdebug or r.Shaders.Optimise=0) to take effect.
New: Added support for Nvidia Aftermath v1.3.
New: New cvar r.ForceStripAdjacencyDataDuringCooking that will strip adjacency data from all Skeletal and Static meshes during cooking.
New: ViewToClipNoAA added to uniform shader parameters. This allows "After Tonemapper" post process blendables that can be positioned in view space when using temporal AA.
New: Added Render Target Pool Stats. Use "stat render target pool" to view.
New: Added support for Speedtree 8.
New: Update to AGS v5.1.1 support.
New: Changed IES texture brightness to be the max candela value and set the texture multiplier to 1.
New: Made major changes on how Vulkan manages its descriptor pools.
New: Support for DX11 quad buffer stereo rendering.
New: Added unified present threshold console variables between all supported platforms.
New: Added RHI Command List Enqueue Lambda method which can be used to enqueue arbitrary tasks on the RHI thread from the render thread without having to write lots of boilerplate FRHI Command functor classes.
FX
Bug Fix: Fixed texture streaming with particle Subuv.
New: Implemented a fast particle pool memory which default is 2MB and automatically cleans up oldest used pool slots.
Lighting
Bug Fix: Fixed Lightmass correctness issues so that, Point, Spot, Directional and Skylights more closely match reference renderer.
Bug Fix: Removed legacy fudge factor on point / spot light photon energy * Spotlights no longer emit based on indirect photon paths.
Bug Fix: Added the ability to enable LPVs for Mac Metal.
Bug Fix: Integrated the LPV_STORE_INDEX_IN_HEAD_BUFFER related changes from Lionhead to make Light Propagation Volumes potentially viable on non-Microsoft platforms.
Bug Fix: Created Stat Map Build Data to track the memory size of lighting and reflection capture build outputs.
Bug Fix: Added support for Reflection Captures to support Lighting Scenarios without the need for recapturing.
Bug Fix: Reflection Captures are now part of the Map Build so modifying a Reflection Capture in Editor will display a preview, but game can only display built captures (black for unbuilt with screen message) *
Bug Fix: Moved Reflection Capture build data to the Build Data package so building lighting / Reflection Captures no longer dirties U Levels.
Bug Fix: Added the ability for Skylights which capture the scene to work correctly with Lighting Scenarios.
Bug Fix: Lighting Scenarios must now be loaded for each time they are made visible no switching back and forth while keeping both loaded.
Bug Fix: Exposed the Emissive Boost, since Lightmass supports Emissive areas on Static Meshes.
Bug Fix: Added the ability for Translucency lighting modes to work with Volumetric Lightmaps to give the lighting much more accuracy.
Bug Fix: Added A Volumetric Lightmap Density Volume which gives local control over Volumetric Lightmap density.
Bug Fix: Volumetric Lightmap no longer refines around static translucent geometry.
Bug Fix: Reworked brick culling by error mechanism which now compares error to interpolated parent lighting instead of the brick average.
Bug Fix: Added Lightmap Type to Primitive Component.
Bug Fix: Added new property, Force Volumetric which allows forcing Static Geometry to use Volumetric Lightmaps.
Bug Fix: Added Force Surface which replaces b Light As If Static Improvements to Volumetric Lightmap quality needed for static geometry.
Bug Fix: Fixed Stationary light shadowing so it is now dilated when inside geometry.
Bug Fix: Added refinement around geometry which uses an expanded cell bounds when the geometry is going to use Volumetric Lightmaps, since cross-resolution stitching causes leaking.
Bug FIx: Added a New World Settings Lightmass property, Volumetric LightmapSphericalHarmonic Smoothing, which controls the global amount of smoothing applied.
Bug Fix: Hidden levels now keep their last build data when using lighting scenarios. Fixed light build bug where hidden levels affected volumetric lightmaps.
Bug Fix: Prevent an assert from firing when LPV Intensity is saved to a value of 0.0.
Bug Fix: Fixed instanced static mesh lightmaps when using lighting scenarios.
New: Added a intensity units property, for Point and Spot lights, that can be set to Candelas, Lumens or Unitless (as before).
New: Added r.Capsule Direct Shadows and r.Capsule Indirect Shadows for more specific scalability control over capsule shadow features.
New: Added a Lighting Feature show flags for RTDF shadows and Capsule Shadows.
New: Capsules Shadows are now disabled on the lowest shadow scalability quality.
New: Added Forward shadowing of Directional light for Translucency Static shadowing and added CSM supported with minimal filtering (1 PCF) Deferred renderer. This affects Translucency using 'Surface Forward Shading' lighting mode while the Forward renderer affects all translucency.
New: Volumetric fog is now always interpolated in the Pixel shader, since per-vertex interpolation gives consistently poor results.
New: Added a checkpoint for Screen Shadow Mask Texture, allowing Vis Screen Shadow Mask Texture.
Materials
Bug Fix: Return error for external texture if not used in pixel shader
Bug Fix: Fixed vertex interpolator node when used with sprite and gpu particle materials.
Bug Fix: Fixed an issue where media textures that were referenced by a material and then later associated with a media player by a Blueprint would fail to render media
Bug Fix: Fixed a bug where renaming a parameter in a material using functions would sometimes reset the value in child instances.
Bug Fix: Fixed texture streaming issue when the camera FOV is reduced by gameplay (like when using a weapon scopes)
New: Added a Channel Mask material parameter, changeable at runtime and returns the selected channel of it's input.
New: Material Layers experimental enable flag has moved to Rendering > Experimental > Support Material Layers.
Mobile Rendering
Bug Fix: Cancelled the applied exposure scale with planar reflections for non-hdr mobile
Bug Fix: Added half float -> float conversion for static vertex buffers when device does not support half float vertex attributes.
Bug Fix: Fixed ES2 issue reading from a render target in float format for BGRA8888 when the glReadPixels() call fails
Bug Fix: Provided an implementation of round() under GLES2 when the shader compiler does not provide it
New: Added GLES3.1 compatibility mode for mobile preview to the windows Open GLRHI.
New: Changed the way Manual Vertex Fetch is enabled on Metal platforms so that it is enabled when targeting Metal v1.2 and higher (macOS 10.12+/iOS 10+).
New: Pass project defined maximum CSM count to mobile CSM shaders.
New: Added CSM distance fading for mobile.
New: Add mobile CSM shader culling for movable directional lights.
Optimizations
Bug Fix: Vulkan descriptor pools are now allocated per PSO instead of globally to reduce peak mem consumption and fragmentation (enabled on Windows only).
Bug Fix: Reduced Vulkan saved pipeline cache file size by filtering duplicated shader SPIRV.
Bug Fix: Fixed Actors with HLOD parent primitives not having synchronized visibility with their parent.
Bug Fix: Improved consistency of HLOD transitions by using the distance to the closest point on the AABB instead of distance to the origin of their bounding sphere.
Bug Fix: Applications using OpenGL now properly discard copy of vertex and index buffers in memory once they are uploaded to GPU.
New: Added new command "stat Streaming Overview" giving high level metrics of Texture usage.
New: Added optional support for compressing saved Vulkan PSO cache with r.Vulkan.Pipeline Cache Compression.
New: Added Shader compiling memory optimizations to reduce memory and time it takes to re-compile Editor shaders.
New: Removed read / write locks from PipelineStateCache, Estimated half time on the main thread for PipelineStatecache operations. Introduces duplicate which are cleaned up at end of frame. Slightly higher memory overhead.
Optimized the texture streaming update loop when using a many small levels in big worlds.
New: Added new CVar "r.DistanceFields.ForceMaxAtlasSize" (defaults to zero, which leaves the current behavior of auto-sizing up to configured maximum, as needed.) This can be used to avoid costly hitches when resizing the distance field in projects with large numbers of actors.
New: Added new "r.HLOD.DistanceOverride" CVar, which can be used to force all HLODs to transition at a specific distance, regardless of the individual cluster's settings. Defaults to zero (off).
New: Added new options to HLOD settings to strip adjacency and distance fields from generated meshes.
New: Added new CVar for adjusting view distance scale, which is not affected by scalability settings
Depreciation: Removed deprecated Lightmap UV Bias, Shadow Map UV Bias from Instanced Static Mesh Component.
Post Processing
Bug Fix: Fix the black ground scattering on Metal by ensuring that input into abs() is non-negative for atmosphere rendering.
New: Added color grading control Blue Correction to correct for artifacts with "electric" blues due to the ACEScg color space.
New: Added color grading control Expand Gamut which expands bright saturated colors outside the sRGB gamut to fake wide gamut rendering.
New: Implemented a SSR input Post Process Material location to compose linear color space backplate after TAA for next frame's SSR.
Tools
Crash Fix: Fixed a crash when attempting to move windows in VR Mode.
Bug Fix: Fixed an issue where the Selection Laser is detached from the controllers in VR Mode.
Bug Fix: ASTC texture versioning is now corrected.
Bug Fix: Fixed an issue reporting some files did not copy to the device while those files were not actually needed.
Bug Fix: Fixed an issue where SSH Key was failing to generate when the remote Mac user contained a space character.
Bug Fix: Fixed memory corruption that could occur when using the Open Level dialog.
New: Added Post Process preview in the Material Editor viewport.
New: Reduced memory requirements of many graph related classes by storing data as FName instead of FString.
New: Added support for Debug Symbols (.udebugsymbols) files generated for iOS builds to be used for memory debugging with the Memory Profiler 2 tool.
AutomationTool
Bug Fix: Fixed parallel executor getting the working directory from task element, not tool node.
Bug Fix: Fixed an issue to no longer produce access violations when creating an installed build of the Engine when running the Automation Tool directly, rather than using the Automation Tool Launcher.
Bug Fix: Fixed one CPU core spinning during compilation using Parallel Executor. This was due to Completed Event never being reset.
Bug Fix: Fixed the ability to distribute app-local dependencies with games packaged using the launcher UE4 distribution.
Bug Fix: App-local dependencies now work for C++ and Blueprint projects.
Bug Fix: Fixed a race condition where stdout/stderr write handles could be inherited by multiple processes, resulting in them not being closed (and the process exit being detected) until all processes that inherited them had been closed. This improves performance of Parallel Executor.
New: Added support for external projects with new parameter to "List Third Party Software". The parameter -ProjectPath specifies the path to the UProject file so that the build rules of the project are parsed and the folder is part of the search for Third Party Software files.
New: Lowered logging level for cached file notification during Cook from Warning to Verbose in order to reduce clutter in the logs.
Removed: Code that deletes cooked data on a failed cook has been removed because deleting the cooked data prevented post-mortem analysis. The Engine now writes out packages transactionally by writing to a temporary file and moving them into place.
Removed: Code that executes pre-built steps from UE4Build have been removed. These build steps should be executed before generating the action graph. So, they have to be done inside of Unreal Build Tool. Running them here causes them to execute twice.
Bug Fix: Fixed an issue causing Unreal Build Tool on Mac to not use makefiles which made the time it needed to determine what needs to be compiled and linked longer than on other platforms.
Bug Fix: Fixed some errors encountered when cleaning a target now cause an error code to be returned.
Bug Fix: Fixed the command line arguments -monolithic and -modular to now be parsed correctly.
Bug Fix: Fixed the include paths ending with a backslash that were escaping the closing quote when passed on the command line.
Bug Fix: Fixed post-build steps for plugins not being executed.
Bug Fix: Output subfolders are no longer being appended twice when generating project files if set by a target. This was causing incorrect command lines when launching from Visual Studio.
Bug Fix: Fixed batch file paths not being quoted correctly when running through XGE.
Bug Fix: Fixed issues where Includes with backslashes were causing an unhandled exception on a background thread in UnrealBuildTool.
Bug Fix: Fixed Version and Module files not being included in the output manifest. This fixes missing version information when using precompiled binaries in UGS.
New: UnrealBuildTool now passes the "/d2cgsummary" flag to the Visual C++ compiler when running with the -Timing argument.
New: Added include and library-related fields to module JSON output.
New: VSWHERE is now used to find the location of MsBuild.exe if available.
New: Improved error handling while generating intellisense project files. It now includes the name of the target being compiled and allows project file generation to continue without it.
New: The Module Rules "Definitions" property has been renamed to "Public Definitions", to make its semantics clearer. A complementary "Private Definitions" property has also been added.
New: UnrealBuildTool now validates that targets don't modify properties that affect other targets. For example, modifying global definitions for an Editor target, where other Editor targets may use the same engine DLLs).
New: Added the build version to the Target Rules and Read Only Target Rules class, allowing it to be read from the target and module rules files.
Changed Launch.build.cs to set private defines for the build changelist in non-formal builds. This allows the version to be set correctly with executables copies to console.
Modular builds on desktop platforms still read the Version file from disk since it's quicker than compiling a value in. They need the build ID to prevent mismatched binaries.
New: Version files are only generated for non-promoted builds, and are included in the target receipt.
New: Added a proper error message if the user attempts to clean a target through hot-reload, rather than just failing to delete DLLs because they are locked.
New: Added a logging option to format output messages in a form that can be parsed by MSBuild. This prevents errors showing as "EXEC: Error:", and displays them correctly in the error list window.
New: Unreal Build Tool will now use the preferred source code accessor selected in the editor to compile with, if an explicit compiler is not specified in the Project Settings.
Deprecated: Unified command line arguments now use the -Name=Value syntax. The following are no longer supported:
-Module
-ModuleWithSuffix
-Plugin
-Receipt
Removed: UDN documentation source files are no longer included in the generated project files by default.
UnrealGameSync
Bug Fix: Fixed an issue so that the project filename is now included in the Editor build command. The project may not be in one of the .uprojectdirs paths.
Bug Fix: Fixed an issue where the "More Info..." button was not using P4 server override.
New: UProject and UPlugin files are now treated as code changes. This change was made since they can invalidate the Editor target.
New: Added an in-place error panel to show when a project fails to open. This enables the user to retry and have their tabs saved instead of creating a modal dialog.
UnrealVS
New: Added Log GenerateProjectFiles to file when using Unreal VS.
UI
Crash Fix: Fixed a line-ending inconsistency when retrieving rich-text that could cause Input Method crashes.
Bug Fix: Fixed some clipping issues that could occur with right-aligned text.
Bug Fix: No longer include trailing whitespace in the justification calculation for soft-wrapped lines.
Bug Fix: Fixed incorrect text selection when selecting via double-click beyond the bounds of the line.
Bug Fix: Special case zero-width space in the text shaper to avoid fonts rendering the fallback glyph.
New: Added support for displaying and editing numbers in a culture-correct way.
Numeric input boxes in Slate will now display and accept numbers using the culture correct decimal separators.
This is enabled by default, and can be disabled by setting Should Use Localized Numeric Input to False in the XEditorSettings.ini (for the editor), or XGameUserSettings.ini (for a game).
New: Added support for a catch-all fallback font within composite fonts.
This allows you to provide broad "font of last resort" behavior on a per-composite font basis, in a way that can also work with different font styles.
New: Added support for per-culture sub-fonts within a composite font.
This allows you to do things like create a Japanese specific Han sub-font to override the Han characters used in a CJK font.
New: Safe Area is no longer centered or symmetrical. The UI will use the Safe Area reported by the OS.
New: Added Editable Text functionality.
Slate
Crash Fix: Fixed a crash when trying to use the widget reflector in a client cooked build.
Bug Fix: Fix for accessing Slate window on render thread.
Bug Fix: Updated Scroll Descendant Into View to account for situations where there are widget(s) between the target widget and the scroll panel, causing the widget geometry position to be incorrect.
Bug Fix: Fix slate material box brushes not being keyed off image size.
Bug Fix: Added conditional to only unregister the active timer if it is still valid after execution.
New: Slate now supports a Custom Boundary Navigation type which allows a custom handler if the boundary is hit.
This provides the ability to have normal navigation while within the boundary and the custom function only on the boundary.
This differs from Custom, which is a full override of the navigation behavior.
UMG
Crash Fix: Fixed a crash on compile due to partial GC of a widget.
Crash Fix: Implemented a fix for a crash when playing in editor after changing, but not compiling, a Widget Blueprint.
Crash Fix: Calling Find All Ancestor Named Slot Host Widgets For Content with a null widget no longer crashes.
Crash Fix: When the widget passed to Scroll Widget Into View is null no longer crashes.
Bug Fix: Fixed an issue where instanced properties on User Widgets might not be properly serialized.
Bug Fix: Fixed paused state so that it doesn't return widgets to their original state.
Bug Fix: UMG objects returned from Create Widget and Create Drag Drop Operation are no longer marked as always being hard referenced by the Blueprint.
This means that they will no longer leak if they are not in use.
If the widget or drag operation are on screen they will correctly stay in memory, but if you need them not to be freed for other reasons, you can assign them to a Blueprint variable.
Bug Fix: Fixed Additive UI materials not being affected by the widget opacity.
Bug Fix: Common Activatable Panel does a better job of cleaning up when it is reset.
Bug Fix: Common Tree View Set Selection correctly updates list navigation and behaves similarly to Common List View Set Selected Item.
Bug Fix: Fixed Common Buttons sending OnClicked events while not interactable.
Bug Fix: UMG Designer now refreshes the widget preview even if the dragged widgets are dropped in an invalid location.
Bug Fix: Implemented a fix for large numbers of unused variables being added to the widget Blueprint when the widget hierarchy was changed.
New: Exposed the virtual keyboard dismiss action to UMG.
New: Added Auto Wrap Blueprint-callable function to Text Block.
New: Now able to set escape keys and no key specified text on Input Key Selector.
VR
Crash Fix: Fixed a crash that could occur when increasing and then decreasing screen percentage in the editor's VR preview.
Crash Fix: Fixed a bug on Gear VR that could cause the app to crash or hang when invoking the system quit menu.
Crash Fix: Fixed a crash that could occur on Oculus when creating a layer surface likely from a bad graphics state.
Crash Fix: Fixed a crash that could occur on Gear VR when using more than four stereo layers.
Bug Fix: Prevented the Front Buffer surface on Gear VR from needlessly updating. This improves performance.
Bug Fix: Fixed a perf regression introduced in 4.17 on Oculus platforms, where operating on the HMD was slowed by a superfluous module check.
Bug Fix: Motion Controller Component cleanup should now be in Begin Destroy and not Destructor.
Bug Fix: Moved View Extension cleanup to Begin Destroy from the Destructor.
Bug Fix: Fixed Google Cardboard rendering upside down on some iOS devices.
Bug Fix: Fixed an issue with the Google VR controller model laser pointer portion wasn't rendering.
Bug Fix: Prevented the editor's camera from locking after playing in Google VR instant preview.
Bug Fix: Set the default value of CVar "vr.HiddenAreaMask" back to one, meaning that it is enabled by default.
Bug Fix: Fixed a bug causing world-locked stereo layer positions to be wrong on SteamVR when there was a camera component representing the HMD.
Bug Fix: Oculus Go touchpad event is now properly handled.
New: VR Screen Percentage Refactor to support new upsample and Dynamic Resolution rendering features.
New: Added new Visualization settings to Motion Controller components.
New: You can now input "bSupportsDash" under the [Oculus.Settings] heading in your engine config files to enable Oculus Dash support.
New: Added a render event name for the Oculus Rift's eye padding draw
New: New code path for deciding whether to add a post processing pass for HMD distortion has been simplified to happen only in a single place.
New: Allow VR plugins to have multiple viewports and rotated eye orientations.
New: Added a new Controller model for the Oculus Go controller.
New: Added several Oculus specific functions that can be accessed through Blueprint.
New: Enabled the debug canvas layer by default for the console command window on Gear VR and Google VR.
XR
Bug Fix: Fixed a bug with Oculus, that caused the tracking origin to be reset on startup.
Bug Fix: Fixed a visual bug that could produce a strobe effect with lighting when using the VR preview in editor.
Bug Fix: Fixed a soft assert that would trigger when exiting the editor's VR preview on Oculus.
Bug Fix: Disable Mobile Multi-View when running with Google cardboard.
New: Added a new Blueprint function that supplies a tracking-to-world space transform.
New: Added an option to Motion Controller components to display an optional model (see the Display Device Model option in the component's details). This can be set up to generically use the device model provided by the backing XR system (if the system has one).
New: Added new Blueprint functions to generically enumerate tracked devices, and another to request a model component from the backing XR system (NOTE: at the moment, not all systems support returning a model).
New: Added a "MotionDelayBuffer" service to the engine that hooks into Motion Controller components, and lets programmers render the components at a delayed rate (to simulate latency).
New: The XR Tracking System and Head Mounted Display interfaces have been updated to make handling of pose updates and late update more explicit.
New: Added a new function to convert poses from XR tracking space to Unreal world space.
New: Changed Motion Controller "Hand" property to a more generic "MotionSource" name field, and added a function to query for generic float parameters associated with the Motion Controller.
New: Added new Blueprint node for getting the current VR pixel density.
New: Refactored the Rift dynamic resolution implementation to use the new general Dynamic Resolution State interface.
Deprecated: XR Tracking System interface function Refresh Poses has been deprecated.
UPGRADE NOTES
Animation
Animation Blueprints
Bug Fix: We changed the order of how to get the material default value. Previous implementation was to get parent's default value first before itself, which was incorrect, so we fixed it by getting own default value if exists. However, if you were relying on the previous behavior, you're going to see the issue. Make sure you set correct default value for the material instance if it was incorrect.
Bug Fix: If this code executes (it will add a log warning if it does) it could cause problems with deterministic cooking / patch creation as new guids will be generated during cook. Nothing will break but patch sizes could be bigger than they need to be. Re-saving the problematic assets will correct the issue.
Skeletal Mesh
The function name of SkeletalMeshComponent::PostBlendPhysics() has changed to FinalizeAnimationUpdate() for clarification.
Blueprints
New: Step Into has been remapped to the F11 key. The Step Over command is now mapped to F10 instead.
User Defined Structure default values now correctly propagate to Blueprints and other user structs the same way that native structs do, which may cause issues if your Blueprints were depending on this not happening. If this is a problem you may need to modify your outer Blueprint or struct and set some values back to empty or 0.
Core
VisitTupleElements has had its parameters swapped - the visitor is now the first parameter, followed by the tuple (now multiple tuples). Any existing calls will need to swap arguments.
FPlatformAtomics::AtomicRead64() has been deprecated - the int64* overload of FPlatformAtomics::AtomicRead() should be used instead.
The bWasReloaded flag was removed from FModuleManager::LoadModule(), FModuleManager::LoadModuleChecked() and FModuleManager::LoadModuleWithFailureReason() - any calls to it should also remove this flag.
TWeakObjectPtr's constructor was silently casting away const - this has now been fixed and any use of TWeakObjectPtr will now warn if it is not const-correct.
UE4 containers now expect move-aware allocators, which means they need to support the MoveToEmpty(SrcAllocator) member function to move all elements across from SrcAllocator, leaving it empty. Any custom allocators should be updated to add this support and specialize the TAllocatorTraits
::SupportsMove trait to be true.
FArchive::Logf() and checkf() macros will now fail to compile if they are not passed a literal formatting string - these should be fixed to use a formatting string like TEXT("%s").
If you have any build scripts that were looking for DevelopmentAssetRegistry.bin or other metadata files you will need to change them to look inside the new Metadata directory.
Code using PLATFORM_COMPILER_HAS_DEFAULTED_FUNCTIONS or PLATFORM_COMPILER_HAS_AUTO_RETURN_TYPES should be replaced with the assumption they are always 1.
ENamedThreads::RenderThread and ENamedThreads::RenderThread_Local are no longer publicly accessible - they should be replaced with ENamedThreads::GetRenderThread() and ENamedThreads::GetRenderThread_Local() respectively.
TVariant::GetType() now returns an EVariantTypes enum class instead of an int32. Any use of this return value as an int32 will need to be changed to EVariantTypes.
Bug Fix: PPF_DeltaComparison has been removed - please remove any use of it from your projects.
Bug Fix: Calls to Align(), AlignDown(), AlignArbitrary() and IsAligned() will no longer compile if passed a non-integer/pointer type. This often fires when trying to align an enum value - it should be explicitly cast to an integer type of the right size.
Bug Fix: Calling FReferenceCollector::AddReferencedObjects() with a container will no longer compile if the container has pointers to forward declared types. You should ensure the types are included before calling this function.
Editor
The debug canvas for stats is always dpi scaled in editor and pie. Any existing tools that directly use viewport positions and size as positions for canvas elements will need to divide out dpi scale using 1/CanvasObject->GetDPIScale().
Bug Fix: Note: This only applies to perforce users NOT using ticket based perforce servers.
The perforce source control login flow will no longer make any calls to the perforce api which results in a password being stored on the local users machine by perforce. We have determined that the security level of this feature is not high enough to continue using it. Users can now expect to have to type in their perforce password each time they open the editor. To work around this users can call p4 passwd directly from a command window to save a password on their machine if they so choose.
Sequencer
'Shared' templates are no longer supported in sequencer. Any previous usage should be easily converted to a shared execution token instead (FMovieSceneExecutionTokens::AddShared()). Such tokens are shared globally, rather than per sub-sequence which makes them more robust for shared responsibilities.
Gameplay Framework
Bug Fix: AActor subclasses that intended to be tick when the tick type is Viewports Only and did not set ShouldTickIfViewportsOnly to return true, but instead overrode TickActor and did not filter on Viewports Only will now need to correctly implement the ShouldTickIfViewportsOnly virtual to achieve this behavior.
Networking
UNetConnection::ClientWorldPackageName is now a private member and should accessed through the available Get/Set methods.
Bug Fix: The net.UseAdaptiveNetUpdateFrequency console variable is now disabled by default, since it was causing replication issues in many projects. If you want to keep using this feature, you may enable it in your project's configuration files.
Rendering
We changed the Vertexbuffer layout from AoS to SoA because after this change we can bind the Input Attributes via SRV and load them in the shader directly (also from compute). Before this change it was very difficult and involved manual interpretation of the Raw Data in the Shader. We recommend everyone to follow this route and change their Vertexbuffers as we probably will deprecate mixed AoS Vertexbuffers in the Future.
Tube shaped point lights will be automatically rotated to compensate for this change. Dynamically generated point lights with a source length will need to have a 90 degrees pitch added to their orientation.
Inside Sequencer->Render->CompositionGraphOptions Set Output Format to "Custom Render Passes" Set Include Render Passes to "Final Image" Set Capture Frames in HDR "On" Set Capture Gamut to "Linear"
UI
UMG
Bug Fix: If a game was depending on this functionality to keep things loaded, change your Blueprint to set the return value into a variable, that will keep it hard referenced normally
XR
Deprecated the MotionController component's EControllerHand 'Hand' property and replaced it with a 'MotionSource' name. To support the same functionality, use names corresponding to the old enum values ("Left", "Right", etc.).
vr.oculus.PixelDensity.adaptive has been removed. Dynamic resolution needs to be enabled using the general engine system now.
PROGRAMMING UPGRADE NOTES
AI
Bug Fix: Fixed a bug in UAISense_Sight::UnregisterSource that resulted in sources being removed not getting removed from the ObservedTargets collection.
Bug Fix: Now when meshes are set to EMeshComponentUpdateFlag::AlwaysTickPose, we optionally kick of a task to perform parallel update only (no evaluation). Previously we needed to run this on the game thread as if we skipped evaluation we did not run a worker task.
Audio
New: The new Microphone Capture Component enables feeding audio directly into a game for use with the Audio Engine and for driving in-game parameters. Limitations:
Only implemented on PC.
Only for local client microphone capture, not connected to VoIP system.
Doesn't support recording microphone output to assets.
Automation
Added automated test per-platform blacklist, defined in the project DefaultEngine.ini or platform equivalent e.g. MacEngine.ini:
[AutomationTestBlacklist] ;Use format +BlacklistTest=(Map=/Game/Tests/MapName, Test=TestName) +BlacklistTest=(Map=/Game/Tests/Physics/Stability, Test=StackCubes)
Automated test screenshots are now able to test against a hierarchical fallback platform and RHI. This is defined in DefaultEngine.ini, for example:
[AutomationTestFallbackHierarchy] ;Use format +FallbackPlatform=(Child=/Platform/RHI, Parent=/Platform/RHI) +FallbackPlatform=(Child=/Windows/OpenGL_SM4, Parent=/Windows/D3D11_SM5) +FallbackPlatform=(Child=/Mac/SM5, Parent=/Windows/D3D11_SM5)
Core
Any usage of FStartupPackages in project code should be removed
"Native" (now called "FNativeFuncPtr") is now a function pointer that takes a UObject* context, rather than a UObject member function pointer. Use "P_THIS" if you were previously using the "this" pointer in your native function.
FTickableBaseObject::GetTickableType can be overriden and returns an enum value of either Conditional (default), Always, or Never.
Tickable Never objects will not get added to the tickable array or ever evaluated.
Tickable Always objects do not call IsTickable and assume it will return true.
Tickable Conditional objects work as in the past with IsTickable called each frame to make the determination whether to call Tick or not.
FTickableObjectBase::IsTickable() is no longer a pure virtual.
To match the change in functionality EResourceSizeMode::Inclusive was changed to EstimatedTotal. If any of your code was using Inclusive, you need to decide whether you want to change to EstimatedTotal or Exclusive.
The ResourceSize asset registry tag has been removed as it was inaccurate. To get an accurate estimate, you must load the object and call GetResourceSize().
Bug Fix: Cleaned up delegate definitions in UObjectGlobals.h and moved several to be WITH_EDITOR. If you were using those definitions in a game you will need to wrap your use in WITH_EDITOR.
Editor
Being an early feature, some Editor UI buttons do not work during playback of demos.
The debug canvas for stats is always dpi scaled in editor and pie. Any existing tools that directly use viewport positions and size as positions for canvas elements will need to divide out DPI scale using 1/CanvasObject->GetDPIScale().
RenameAssetsWithDialog() is often recommended in places where RenameAssets() was being used.
The SizeMap and ReferenceViewer have been merged into the AssetManagerEditor plugin so they can share the data needed to show cooked sizes.
Gameplay Framework
C++ Header cleanups for Actor, Pawn, Character, Controller, and PlayerController. Marked several fields as private that were accidentally exposed in 4.17.
Projects using this plugin are strongly encouraged to add a project specific subclass of UDataValidationManager. Any objects that require validation should override IsDataValid.
Overriden versions of USignificanceManager::Update will need to change signature to receive a TArrayView instead of const TArray&.
Networking
Bug Fix: Programmers using HasPendingConnection() on a listening FScocket should consider using the new WaitForPendingConnection() function instead, which prevents wasting CPU cycles due to polling. This change supported a change to the behavior of the FTcpListener constructor's InSleepTime parameter which now specifies the maximum time to wait for a connection in its polling loop. It now defaults to 1 second, but using a value of 0 will restore the previous polling behavior.
Online
This Online Presence Test requires that the testing user to have friends and be connected to the platform. Usage is "online test presence [OPTIONAL ID]". The ID passed should be an arbitrary non-friend. If no ID is passed, the arbitrary lookup test is skipped. This test does wait for presence updates for about two minutes, after this time if no updates are received, the harness will mark the test as a failure.
Rendering
The STRONG_TYPE #define now seen in UE4 .usf/HLSL is actually hlslcc's "invariant" keyword applied as a type-qualifier to a Buffer<>/RWBuffer<> type - only valid when using Metal which exports this through ILanguageSpec and #define'd out for everyone else. When this is applied the MetalBackend will emit a "raw" buffer whose type must match the shader exactly, which avoids the overheads of Linear-Textures & Function-Constants.
Vertexbuffer Objects now bind their SRVs directly instead of filling out a StreamDescriptor. Care must be taken when the Vertexfactory is initialized before the VertexBuffer as the bound SRV will become invalid if the underlying vertexbuffer is re-created. We do not support late binding of the data anymore.
Mobile Rendering
Removed cvar "r.AllReceiveDynamicCSM" and primitive flag "FPrimitiveSceneProxy.bReceiveCombinedCSMAndStaticShadowsFromStationaryLights". Added "FPrimitiveSceneProxy.bReceiveMobileCSMShadows", which now applies to both stationary and movable lights and defaults to on.
Tools
UEdGraphPin::PinCategory, UEdGraphPin::PinSubcategory, and UEdGraphPin::PinName are now stored as FName instead of FString. UEdGraphPin::CreatePin has several simplified overrides so you can only specify Subcategory or SubcategoryObject or neither. UEdGraphPin::CreatePin also takes a parameter bundle for reference, const, container type, index, and value terminal type rather than a long list of default parameters. Material Expressions now store input and output names as FName instead of FString. FNiagaraParameterHandle now stores the parameter handle, namespace, and name as FName instead of FString. Most existing pin related functions using string have been deprecated.
UI
Slate
Renamed FTextBlockLayout to FSlateTextBlockLayout to reflect that it's a Slate specific type
XR
If you maintain a VR or AR plugin, you will have to remove your RefreshPoses() override and move any late update functionality to OnBeginRendering_RenderThread/GameThread(). Alternatively, if your plugin does not support late update, you can remove RefreshPoses() completely and override DoesSupportLateUpdate(), returning false.
Added two methods to the IXRTrackingSystem API - GetTrackingToWorldTransform() and UpdateTrackingToWorldTransform(). GetTrackingToWorldTransform() is expected to return an transform that will convert poses from GetCurrentPose() into world space. UpdateTrackingToWorldTransform() is meant for users to override the transform since the anchor between the two spaces is controlled by the project.
MotionControllerComponents' 'Hand' property is no longer available. Instead, switch to using the new 'MotionSource' field with text names matching the old enum values ("Left", "Right", etc.). For ease of use, "Left" and "Right" are predefined in FXRMotionControllerBase (as LeftHandSourceId & RightHandSourceId).
Replaced EControllerHand parameters in the IMotionController API with generic "source" name parameters. To handle backwards compatibility, use name values corresponding to the old enum ("Left", "Right", etc.).
Added two new functions to the IMotionController API - EnumerateSources() & GetCustomParameterValue(). Use EnumerateSources() to return a list of the supported devices (replaces the static EControllerHand enum). GetCustomParameterValue() can be used to expose float values associated with the motion device (such as FOV with cameras, etc.).. | https://docs.unrealengine.com/4.27/zh-CN/WhatsNew/Builds/ReleaseNotes/4_19/ | 2021-11-27T05:56:49 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/419Banner.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_1.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_2.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_3.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_4.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_5.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_7.gif',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_8.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_12.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_13.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_14.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_15.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_16.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_17.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_18.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_19.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_21.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_23.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_26.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_30.gif',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_32.jpg',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_33.jpg',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_34.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_35.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_36.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_38.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_39.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_40.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_41.jpg',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_42.png',
'image alt text'], dtype=object)
array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_19/image_43.gif',
'image alt text'], dtype=object) ] | docs.unrealengine.com |
More info about the calculation of our metrics
Documentation for GitHub, GitLab, Azure DevOps, Bitbucket, and Jira integrations
Learn how to invite users, manage user permissions, roles, and SSO settings
More info on how to set up the Resource Planning and Project Costs reports
Step-by-step guide on how to configure Waydev Enterprise
More info about how to use Waydev
More info about our security
More info about the billing | https://docs.waydev.co/en/ | 2021-11-27T04:52:36 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.waydev.co |
IMPORTANT: Before connecting CircleCI, please make sure you have a valid GitHub or Bitbucket integration.
Personal Access Token connection.
Step 1: In order to connect CircleCI, navigate to Integrations, in the Project section.
Step 2: Select CircleCI.
Step 3: Create a Personal Access Token in CircleCI.
Step 4: Paste the Personal Access Token in the field, and click Test Connection. Then, click Connect.
Step 5: Navigate to CI/CD Repositories, under the Project section.
Step 6: Select the repositories you want to connect, then click Save Changes. If you create new repositories in CircleCI, you need to click the Refresh Repositories button to add them to the CI/CD Repositories page. | https://docs.waydev.co/en/articles/5408883-circleci-integration | 2021-11-27T05:58:51 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['https://downloads.intercomcdn.com/i/o/362852235/e876489ef8df07634f57f390/image+%282%29.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/362856449/c84ecebc2f7c75c8c1085590/image+%281%29.png',
None], dtype=object) ] | docs.waydev.co |
============================== The syndication feed framework ============================== .. module:: django.contrib.syndication :synopsis: A framework for generating syndication feeds, in RSS and Atom, quite easily. Django comes with a high-level syndication-feed-generating framework for creating RSS_ and :rfc:`Atom <4287>` feeds. To create any syndication feed, all you have to do is write a short Python class. You can create as many feeds as you want. Django also comes with a lower-level feed-generating API. Use this if you want to generate feeds outside of a web context, or in some other lower-level way. .. _RSS: The high-level framework ======================== Overview -------- The high-level feed-generating framework is supplied by the :class:`~django.contrib.syndication.views.Feed` class. To create a feed, write a :class:`~django.contrib.syndication.views.Feed` class and point to an instance of it in your :doc:`URLconf `. ``Feed`` classes ---------------- A :class:`~django.contrib.syndication.views.Feed` class is a Python class that represents a syndication feed. A feed can be simple (e.g., a "site news" feed, or a basic feed displaying the latest entries of a blog) or more complex (e.g., a feed displaying all the blog entries in a particular category, where the category is variable). Feed classes subclass :class:`django.contrib.syndication.views.Feed`. They can live anywhere in your codebase. Instances of :class:`~django.contrib.syndication.views.Feed` classes are views which can be used in your :doc:`URLconf `. A simple example ---------------- This simple example, taken from a hypothetical police beat news site describes a feed of the latest five news items:: from django.contrib.syndication.views import Feed from django.urls import reverse from policebeat.models import NewsItem class LatestEntriesFeed(Feed): title = "Police beat site news" link = "/sitenews/" description = "Updates on changes and additions to police beat central." def items(self): return NewsItem.objects.order_by('-pub_date')[:5] def item_title(self, item): return item.title def item_description(self, item): return item.description # item_link is only needed if NewsItem has no get_absolute_url method. def item_link(self, item): return reverse('news-item', args=[item.pk]) To connect a URL to this feed, put an instance of the Feed object in your :doc:`URLconf `. For example:: from django.urls import path from myproject.feeds import LatestEntriesFeed urlpatterns = [ # ... path('latest/feed/', LatestEntriesFeed()), # ... ] Note: * The Feed class subclasses :class:`django.contrib.syndication.views.Feed`. * ``title``, ``link`` and ``description`` correspond to the standard RSS ``
Today I had a Vienna Beef hot dog. It was pink, plump and perfect.") >>> print(f.writeString('UTF-8')) | https://django.readthedocs.io/en/latest/_sources/ref/contrib/syndication.txt | 2021-11-27T05:05:34 | CC-MAIN-2021-49 | 1637964358118.13 | [] | django.readthedocs.io |
User API keys¶
To interact with secured endpoints of the Domino API, you must send an API authentication key along with your request. These keys identify you as a specific Domino user, and allow Domino to check for authorization.
To find your API key, first open your Account Settings.
In the setting menu, click API Key. You will see a panel that displays your key. Refer to the API documentation for information about how to use this key.
Be advised that any person bearing this key can authenticate to the Domino application as the user it represents and take any actions that user is authorized to perform. Treat it like a sensitive password. | https://docs.dominodatalab.com/en/4.5.2/reference/users/User_API_keys.html | 2021-11-27T05:20:19 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../../_images/Screen_Shot_2019-02-20_at_11.16.11_AM.png',
'Screen_Shot_2019-02-20_at_11.16.11_AM.png'], dtype=object)
array(['../../_images/Screen_Shot_2019-02-20_at_11.16.40_AM.png',
'Screen_Shot_2019-02-20_at_11.16.40_AM.png'], dtype=object)] | docs.dominodatalab.com |
Uploading files to Domino using your browser¶
When you want to move an existing project into Domino, you have a few options. For smaller projects - less than 550 mb - you can drag and drop files into Domino.
If your project is larger than 550 mb, check out this article on how to use our command line interface or a Domino run to import larger data sets and projects. | https://docs.dominodatalab.com/en/4.6.2/reference/projects/Uploading_files_to_Domino_using_your_browser.html | 2021-11-27T04:52:50 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.dominodatalab.com |
Daylight saving time support for active batch periods
Microsoft Dynamics 365 Finance version 10.0.12 includes a Daylight Saving Time support for batch job active periods feature that can be turned on in Feature management. This feature introduces daylight saving time (DST) support for the active periods for batch jobs and lets users associate their active periods with different time zones.
Note
This feature is a one-way feature. In other words, it can't be turned off after it's turned on.
When this feature is turned on, the following changes occur:
On the Active periods for batch jobs page, a Timezone field is added for each active period. This field specifies the time zone that the active period uses. By default, every active period initially uses the Coordinated Universal Time (UTC) time zone.
The start and end times of existing active periods are adjusted according to the UTC time zone. 'Although the active periods will continue to start and end at the same times that they previously started and ended, the times that are shown might change if the user's preferred time zone isn't UTC.
Active periods will follow the DST adjustments of the time zones that they are associated with. | https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/sysadmin/batch-active-period-dst | 2021-11-27T05:50:02 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
In case of sequential signing, the next recipient in line to sign the document is reminded. In case of parallel signing, all signers who are yet to sign the document will be reminded.
The API sends emails and a callback event (rs.remind) with data to a predefined URL endpoint configured.
In case of requests enabled using the embedded_signing option only the callback event is trigerred. | https://docs.signeasy.com/v2.0/reference/cancel-request-signature-without-markers | 2021-11-27T04:51:38 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.signeasy.com |
(defined by
MemoryResourceTraits::resource_type),. We also have resources that represent global GPU memory (“DEVICE”), constant GPU memory (“DEVICE_CONST”), unified memory that can be accessed by the CPU or GPU (“UM”), host memory that can be accessed by the GPU (“PINNED”), and mmapped file memory (“FILE”). If an incorrect name is used or if the allocator was not set up correctly, the “UNKNOWN” resource name is returned.
Umpire will create an
umpire::Allocator for each of these resources,
and you can get them using the same
umpire::ResourceManager::getAllocator() call you saw in the previous
example:
umpire::Allocator allocator = rm.getAllocator(resource);
Note that since every allocator supports the same calls, no matter which resource it is for, this means we can run the same code for all the resources available in the system.
While using Umpire memory resources, it may be useful to query the memory
resource currently associated with a particular allocator. For example, if we wanted
to double check that our allocator is using the
device resource, we can
assert that
MemoryResourceTraits::resource_type::device is equal
to the return value of
allocator.getAllocationStrategy()->getTraits().resource.
The test code provided in
memory_resource_traits_tests.cpp shows a complete
example of how to query this information.
Note
In order to test some memory resources, you may need to configure your Umpire
build to use a particular platform (a member of the
umpire::Allocator, defined by
Platform.hpp) that has access to that resource. See the Developer’s
Guide for more information.
Next, we will see an example of how to move data between resources using operations.
////////////////////////////////////////////////////////////////////////////// // Copyright (c) 2016-21, Lawrence Livermore National Security, LLC and Umpire // project contributors. See the COPYRIGHT file for details. // // SPDX-License-Identifier: (MIT) ////////////////////////////////////////////////////////////////////////////// #include "umpire/Allocator.hpp" #include "umpire/ResourceManager.hpp" void allocate_and_deallocate(const std::string& resource) { auto& rm = umpire::ResourceManager::getInstance(); // _sphinx_tag_tut_get_allocator_start umpire::Allocator allocator = rm.getAllocator(resource); // _sphinx_tag_tut_get_allocator_end constexpr std::size_t SIZE = 1024;_DEVICE) allocate_and_deallocate("DEVICE"); #endif #if defined(UMPIRE_ENABLE_UM) allocate_and_deallocate("UM"); #endif #if defined(UMPIRE_ENABLE_PINNED) allocate_and_deallocate("PINNED"); #endif return 0; } | https://umpire.readthedocs.io/en/v6.0.0/tutorial/resources.html | 2021-11-27T06:15:32 | CC-MAIN-2021-49 | 1637964358118.13 | [] | umpire.readthedocs.io |
Understanding the forecasting engine
The forecasting engine refers to the specific component within Lokad that is responsible for core predictive analysis. This engine is an advanced piece of software, which can be aptly categorized as machine learning. Moreover, this engine is very different from the classic forecasting toolkits typically geared towards time-series forecasting. In this article, we explain what this engine does, how it differs from classic toolkits, and why these differences matter when it comes to delivering sound results from a supply chain perspective.
Integrated forecasts vs periodic forecasts
The classic time-series viewpoint emphasizes daily, weekly or monthly periodic forecasts. In this case, historical data is represented as periodic time-series, and forecasts are expected to be represented as periodic time-series as well. While this viewpoint might be appropriate for certain domains, such as energy consumption forecasting or highway traffic forecasting, we have found that this approach performs poorly whenever supply chain optimization is concerned.
Periodic forecasts don’t match the underlying reality of supply chain management: if the lead time to be considered is 10 days, should we sum the daily forecasts up to 10 days ahead? Should we sum 1 weekly forecast plus 3 daily forecasts? As a rule of thumb, summing forecasts is a big statistical no-no: the forecast of the sum is not the same as the sum of the forecasts. Therefore, if the lead time is equal to 10 days, then, it’s the responsibility of the forecasting tool to produce a 10-day-ahead forecast, and not a set of intermediate forecasts to be recomposed into the desired forecast.
As a result, Lokad’s forecasting engine is not restricted to the calculation of daily, weekly or monthly forecasts. As a superior alternative to those forecasts, it can directly compute demand forecasts that match the expected lead time. Moreover, the lead time is expected to be a probabilistic forecast of its own. At first, this behavior might appear a bit disconcerting, but unfortunately, the idea that arbitrary forecasts can be accurately recomposed through sums of periodic forecasts is wishful thinking in nearly all supply chain situations.
Probabilistic forecasts vs mean forecasts
Classic forecasting tools emphasize mean forecasts, or sometimes, median forecasts. Yet, when it comes to supply chain optimization, business costs are concentrated at the extremes. It’s when the demand is unexpectedly high that stock-outs happen. Similarly, it’s when the demand is unexpectedly low that dead inventory is generated. When the demand is roughly aligned with the forecasts, inventory levels fluctuate a bit, but overall, the supply chain remains mostly frictionless. By using “average” forecasts - mean or median - classic tools suffer from the streetlight effect, and no matter how good the underlying statistical analysis, it’s not the correct question that is being answered in the first place.
In contrast, Lokad’s forecasting engine delivers probabilistic forecasts, whereby the probabilities associated with every possible future (1) are estimated. These probabilities provide much more granular insights into the future compared to single-valued predictions, such as mean or median forecasts. Through probabilistic forecasts it becomes possible to fine-tune supply chain risks and opportunities. More precisely, for each possible supply chain decision - such as buying 1 extra unit of stock for a given SKU - it is possible, for a given future demand level, to compute a back-of-the-envelope financial outcome (2); and thus, ultimately prioritize all possible future decisions (3) based only on a “weighted” financial outcome by combining financial estimations with probabilities.
(1) However, in practice, to keep the calculations feasible, probabilities that are deemed too low are rounded to zero. Therefore, only a finite set of configurations are actually investigated.
(2) For example, let’s imagine that we buy 1 unit now, and the future demand is 5 (meaning that we will sell 5 units), and have a stock of 2 at the end of the period. If we assume that the gross margin is 10 and the carrying cost per period for each unit unsold is 15, then the net profit is 5 x 10 – 2 x 15 = 20.
(3) Prioritizing all future decisions might sound like a dreadfully complex task, but Lokad’s technology happens to be dedicated to this very problem. This technology is distinct from the forecasting engine itself.
Toy datasets and synthetic data patterns
Practitioners who have already used a forecasting toolkit in the past might be tempted to test Lokad against a time-series of a few dozen data points, just to see how it goes. Furthermore, one might also be tempted to check out how Lokad reacts to simple patterns, such as a linear trend, or a perfectly seasonal time-series. While it may come as a surprise to practitioners, our forecasting engine actually performs poorly in such situations. Unfortunately, this “poor” behavior is necessary in order to obtain satisfying real-life results.
One major specification of our forecasting engine is to deliver accurate forecasts while requiring zero manual statistical fine-tuning. In order to achieve this, Lokad has pre-tuned the forecasting engine using hundreds of real-life well-qualified datasets. Synthetic data patterns, such as a perfectly linear trend, or a perfectly cyclical time-series, are very unlike the patterns we found in real-life datasets. Consequently, such synthetic data patterns, albeit being seemingly straightforward, are not even considered. In fact, what is the point of delivering “good” results on “fake” data, if it comes at the expense of delivering “good” results on “real” data? Our experience indicates that a good forecasting engine is very conservative in its choice of forecasting models. Introducing a “perfect fit” linear regression model just for the sake of succeeding on toy datasets is downright harmful at scale, because this model may be used where it should not have been.
Furthermore, in order to entirely skip all the manual fine-tuning , our forecasting engine relies heavily on machine learning algorithms. As a result, the engine needs a sizeable dataset to work. In fact, the forecasting engine establishes its own settings by performing many statistical tests over the datasets. However, if the overall dataset is too small, the forecasting engine won’t even be able to properly initialize itself. As a rule of thumb, at least 1000 historical data points spread over 100 SKUs are needed to start getting meaningful results. In practice, even small e-commerce businesses, making less than 1 million € in turnover per year, tend to accumulate over 10,000 data points, typically spread over more than 200 SKUs. Therefore, the forecasting engine’s data requirements are actually very low, but not so low that the forecasting engine can be expected to work on “toy” datasets.
Forecasting engine’s data requirements
Here is a short checklist for the forecasting engine’s minimum data requirements:
- items: min 100, better 500, best 2000+
- item attributes: min 1, better 3, best 5+
- past sales orders: min 1000, better 10000, best 50,000+
- past purchase orders: min 50, better 500, best 2000+
- history depth in months: min 3, better 18, best 36+
The “items” refer to the SKUs, products, part numbers, barcodes, or whatever element needs to be forecast depending on the specific business situation at hand.
Purchase orders are required for forecasting the lead times. While they may be omitted at the initial stage, incorrect lead time estimations may wreak havoc on your supply chain, and hence, we strongly recommend providing this data whenever possible.
Input data can be further enriched by providing a history of stock-outs and promotions. In reality, the forecasting engine supports even more advanced data scenarios. The present section only focuses on the “must-have” data.
Macro forecasts and high-frequency forecasts
Lokad’s forecasting engine is geared towards supply chain situations in commerce and manufacturing. These situations are characterized by their numerous SKUs and their erratic demand. It is in such situations that our forecasting engine shines the most. On the other hand, there are other types of forecasts that don’t fit our technology.
Macro-forecasts, which involve forecasting highly aggregated time-series, typically very smooth - once cyclicities are taken care of - and very long, are not Lokad’s forte. Such scenarios are witnessed in energy consumption forecasts, highway traffic forecasts, cash flow forecasts, incoming call traffic forecasts, and so on. Our forecasting engine is geared towards many-items forecasts, where correlations between items can be leveraged extensively.
High-frequency forecasts, which involve time-series with intra-day granularities, are also not handled by Lokad. These situations include most financial forecasts, including commodity and stock exchange forecasts. Here, the challenge lies in the fact that statistical patterns are already exploited - by other people - and have an impact on the future itself. This case is very different from supply chain, where demand forecasts have only a moderate impact, (4) on the observed future demand.
(4) It would be incorrect to say that demand forecasts have no impact on the demand at all. For example, forecasting more demand for a product in a given store is likely to increase the facing of this product, and hence boost the demand. However, such patterns tend to be marginal in supply chain. | https://docs.lokad.com/demand-forecasting/understanding-the-forecasting-engine/ | 2019-01-16T05:45:48 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.lokad.com |
Magento for B2B Commerce, 2.2.x
Connect to Facebook
Magento Commerce has removed the "Magento Social" Facebook integration, and no longer supports the extension.
Magento Social is an integration that establishes a connection between your store and your Facebook account, and creates a shopping page with products from your catalog. When shoppers click a product on your Facebook page, they are redirected to the corresponding product page in your Magento store. All transactions take place from your Magento store.
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/marketing/social-connect-to-facebook.html | 2019-01-16T06:39:42 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.magento.com |
Magento for B2B Commerce, 2.2.x
Billing Agreements
The Billing Agreements grid lists all billing agreements between your store and its customers. The store administrator can filter the records by the customer or billing agreement information including billing agreement reference ID, status, and creation date. Each record includes general information about the billing agreement, and all sales orders that have used it as a payment method. The store administrator can view, cancel, or delete customer’s billing agreements. A canceled billing agreement can be deleted only by the store administrator.
A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more. | https://docs.magento.com/m2/2.2/b2b/user_guide/sales/billing-agreements.html | 2019-01-16T06:50:19 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.magento.com |
Visualize the robot in a simulator
Simulated Poppy Creatures
Simulated versions of all Poppy robots (Humanoid, Torso, and Ergo Jr) are available.
Connection with two main "simulators" were developed:
- using V-REP: a virtual robot experimentation platform
- using a 3D web viewer: lighter but without physics support:
- To discover and try the robot possibilities without having to spend real money.
- In a context where multiple users share a robot. For instance in a classroom where each group can work using the simulator and validate their program on a real robot.
- To design and run complex and time consuming experiments.
We try to make the switch from a simulated Poppy robot to the real one as transparent and as simple as possible. Most of the programming documentation is actually valid for both simulated and real robots. The chapter From simulation to real robot will guide you in the few steps to transform your program running in simulation to one working with a real robot.
If you want to use Poppy robots using a simulator you will have to install some of the poppy libraries locally on your computer.
Install the needed software
Info: A full section is dedicated on how to install everything locally for using a simulator:
- To have a working Python, we strongly recommend to use the Anaconda Python distribution. It works with any version >=2.7 or >=3.4. Prefer Python 2.7 if you can, as it is the version we used.
- To install the Poppy libraries: pypot and the library corresponding to your creature (e.g. poppy-ergo-jr).
Using V-REP:
- Poppy Humanoid
- Poppy Torso
- Poppy Ergo Jr
V-REP can be used to learn how to control motors, get information from sensors but also to interact with the simulated environment. It can be controlled using Python, Snap! or through the REST API. Here, are some examples of what the community has already been doing with it:
- A pedagogical activity to discover the different motor of your robot and how they can be controlled.
- A scientific experiment, where a Poppy Torso is learning how to push a cube on a table in front of it
Even if we try, to reproduce the robot behavior and functioning, some differences remain. In particular, if you make a robot walk in simulation that does not necessarily mean that it will walk in the real world (and vice-versa).
To start the simulated robot, first open V-REP and instantiate you robot with
simulator='vrep' argument. V-REP will open a popup that you will have to close to enable to communication between V-REP and Python.
from pypot.creatures import PoppyErgoJr robot = PoppyErgoJr(simulator='vrep')
If you want to control a simulated robot from Snap, you can also start it directly from the command line interface
poppy-services in your terminal (called command prompt on Windows):
poppy-services --vrep --snap poppy-ergo-jr
Using our web visualizer
Our web visualizer will show you a 3D representation of a Poppy robot. For this, you will need to connect it to either a real robot (through the REST-API) or to a simple mockup robot running on your computer. You simply have to set the host variable from within the web interface to match the address of your robot.
In Python, you can start the mockup robot with:
from pypot.creatures import PoppyErgoJr robot = PoppyErgoJr(simulator='poppy-simu')
Add a
use_snap=True argument if you want to start Snap API.
If you want to use command the mockup robot from Snap, you can also start it directly from the command line interface
poppy-services in your terminal (called command prompt on Windows):
poppy-services --poppy-simu --snap poppy-ergo-jr
As for V-REP, you can control your robot using Python, Snap!, or the REST API. Yet, there is no physics simulation so its lighter but you will not be able to interact with objects.
Here is an example with Python:
| https://docs.poppy-project.org/en/getting-started/visualize.html | 2019-01-16T06:02:45 | CC-MAIN-2019-04 | 1547583656897.10 | [array(['../img/humanoid/vrep.png', 'Poppy Humanoid in V-REP'],
dtype=object)
array(['../img/torso/explauto-vrep.png', 'Torso V-REP'], dtype=object)
array(['../img/torso/explauto-res.png', 'Torso Explauto Res'],
dtype=object)
array(['../img/visu/presentation.png', 'Poppy Simu Presentation'],
dtype=object)
array(['../img/visu/python-setup.gif', 'Poppy Visu with Python'],
dtype=object) ] | docs.poppy-project.org |
Tcl8.6/Tk8.6 Documentation > Tcl C API, version 8.6.8 > TCL_MEM_MEM_DEBUG — Compile-time flag to enable Tcl memory debugging
DESCRIPTIONWhen Tcl is compiled with TCL_MEM_DEBUG defined, a powerful set of memory debugging aids is included in the compiled binary. This includes C and Tcl functions which can aid with debugging memory leaks, memory allocation overruns, and other memory related errors.
ENABLING MEMORY DEBUGGINGTo enable memory debugging, Tcl should be recompiled from scratch with TCL_MEM_DEBUG defined (e.g. by passing the --enable-symbols=mem flag to the configure script when building)..
GUARD ZONESWhen.
DEBUGGING DIFFICULT MEMORY CORRUPTION PROBLEMSNormally, is.
SEE ALSOckalloc, memory, Tcl_ValidateAllMemory, Tcl_DumpActiveMemory
KEYWORDSmemory, debug | http://docs.activestate.com/activetcl/8.6/tcl/TclLib/TCL_MEM_DEBUG.html | 2019-01-16T05:39:37 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.activestate.com |
8.5.100.06
Local Control Agent 8.5.x Release Notes
Helpful Links
Releases Info
Product Documentation
Genesys Products
What's New
This release contains the following new features and enhancements:
- Genesys Deployment Agent is no longer installed with LCA by default.
Resolved Issues
This release contains the following resolved issues:
LCA no longer terminates unexpectedly when monitoring a large number of processes. Previously, the internal process map was not cleared when processes completed, and the map table eventually exceeded the allocated size limit. As a result, LCA terminated unexpectedly. (MFWK-16483)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.100.06.
This page was last modified on June 25, 2015, at 10:58.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/RN/latest/lca85rn/lca8510006 | 2019-01-16T05:24:48 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.genesys.com |
From the find options in Maltego, you can search your current graph as well as saved graphs stored on your machine.
Quick Find
The Quick Find option on the investigate tab is a very handy tool to find something specific in a very large graph. The following toolbar will open at the bottom of your graph (the find toolbar can also be opened by clicking Ctrl + F:
You can now enter a search term, select the specific entity type or specify All (the whole graph) and you have the option to search all the Properties, Notes and Detail View.
Once you click the Find button, the relevant entities will be highlighted in the graph and the search hits will be listed in the Detail View. If you check the Zoom checkbox, then your graph will zoom to your results that match your search criteria.
Find in Files
Find in Files does exactly what the title suggests, it allows you to perform text searches on multiple Maltego graphs that are saved in a specified folder on your machine.
Clicking the Find in Files button open the window shown below:
Under the Where field you can specify the folder that you wish to search. This folder must include .mtgl and/or .mtgx graph files. The Browse button can be used to open a directory window where you can find the folder you wish to search. If the folder that you choose has multiple sub-directories that you also wish to search, then you must check the Recursive checkbox.
The Find input field allows you to specify your search term. The Case Sensitive checkbox can be used to choose whether the search should be case sensitive or not.
The options from the Graph items field will allow you to choose whether to search entities and/or links. It also allows you to limit your search to a specific entity type from the drop down menu.
Finally, the Search in field allows you to choose which of the entities text fields should be searched in. | https://docs.maltego.com/support/solutions/articles/15000010431-find-in-graph | 2019-01-16T06:44:11 | CC-MAIN-2019-04 | 1547583656897.10 | [array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004135048/original/tfgRoBU4XYMA2yTfzh8SuCXQNWujoSDxnw.png?1528121094',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004135213/original/QGJ-2URy7493qPNd-WFZ7VqSkGrAdJ86jA.jpeg?1528121154',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004135295/original/I9MkKscEuAX9fwFAP7_sgJdNkb7TQJkydA.png?1528121178',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004135626/original/b6AF9K1woSiRg7rlL09Ya1WlYUQQ3sDD1Q.png?1528121278',
None], dtype=object) ] | docs.maltego.com |
Efficient and Soluble Expression of α Protein of Clostridium perfringens and Preparation
Animal Husbandry and Feed Science, 2017, 9(5): 284 -288, 305DOI : 10.19578/j. cnki. ahfs.2017.05.007
Bfici^nt and Soluble Expression of a Protein of Clos- fr/d/i/m perfr/ngens and Preparation of GeneticEng^ neering Subunit Vaccine
Sun Yu , Yang Lin , W ang Chuanbin , Dong Hao , Qu Ping , Zhao Bolin , Hu Dongmei , Yang Tianyi , Song Xiaohui"
China Animal Disease Control Center, Beijing 102600, China
Abstract
[Objective] The paper w as to develop genetic engneering vaccine that can express a exotoxin antigen protein efficiently without destroying its immuno-genicity for preventing and controlling the diseases caused by Clostridium pejfringens. [Method] Efficiently expressed soluble rec from Escherichia coli expression system by optimizing codon , removing signal peptide , selecting sequences with better hydrophilicity and antigenicity , and optimizing expression conditions. [Result] Mice obtained higher serum antibody level when immunized by a protein , and the immune protection rates a ainst type A , type B , type C and type D C . pejfringegs were 100% , 90% , 85% and 90% , respectively. The antibody titer of mice within 7 -14 d after the third immunization reached the peak. [ Conclusion ] The a protein h as good immunogenicity , and can be further used to develop genetic engineering subunit vaccines for preventing C. perfrin-
ge?g.
Key words COstridium pejfringens ; a protein ; Soluble expression and purification ; Genetic engineering subunit vaccine
Clostridium efringens is the common clostridium causing gas
gangrene in clinical. It produces large amounts of gas by decomposing sugars in muscles and connective tissues , and leads to severe emphysema of tissue , which further affects blood supply , causing large area necrosis of animal tissue[1]. Gas gangrene of ruminants , enterotoxaemia , hemorrhagic enteritis , sudden death syndrome of
sheep and cattle , lamb dysentery are caused by
C. perfringens^22.
a toxin is the most important exotoxin in all toxin genes of C. per- fnngegs , and type A , B , C , D and E bacteria can produce the toxin[ 52. The gene plc coding a toxin locates on chromosome ,
with the size of 1 194 bp and the molecular weight of 42. 5 Ku , which can encode 398 amino acids , and mature peptide and signal peptide consist of 370 and 28 amino acids , respectively. Currently ,scholars at home and abroad focus on function and pathogenesis of a toxin gene. a toxin hydrolyzes membrane phospholipid of cell membrane relying on sphingomyelinase and phospholipase C , which further destroys cell membrane structure and leads to rapid pyrolysis and death of cells ; meantime , the toxin is sensitive to pancreatin , and w ill easily lose activity after contact6-8]. a toxin gene is relatively conservative , although there are differences in 1.3% nucleotide and 1.2% amino acids sequence on average among different strains , different nucleotides and encoding amino acids do not affect the activity of a toxin its e l[ -2]. The promoter
of a toxin gene can be recognized by RNA polymerase of Esche
richia coli , so a toxin can be efficiently expressed in both C. per-fringegs and E. coli under the action of its promoter. Developing
genetic engineering subunit vaccine that can express a exotoxin antigen protein without destroying its immunogeeicity and can pre
Received : May 15 , 2017 Accepted : June 27 , 2017
Supported by the 13th Five-Year National Key Research and Development Pro
gram (2016YFD0500901).
"Corresponding author. E-mail : syl9830908@ 163. com
vent and control the diseases caused by a exotoxin of
C. perfrii-
gegs is a technical problem urgently to be solved[13].
1 Materials and Methods
1.1 Materials
pET30a fusion expression vector was purchased
from Novagen company ; BL21 ( D E 3) competent cells were brought from Beijing TmnsGen Biotech Co. , Ltd. Restriction en
zymes
BamH I , Xho I , T4 DNA ligase , 2 000 DNA Marker , SDS ,
IPTG , T 〇a PCR Master Mix were received from TaKaRa ( Dalian)
Engineering Co. , Ltd. Agarose , DNA Extraction K it , DNA rapid purification k it , plasmid rapid extraction k it were got from Beijing TransGen Biotech Co. , Ltd. Pre-dyed protein Marker was the product of Fermentas Cooperation. Protein purification column (nickel column 5 m L) , molecular sieve (Superdex 2000) were purchased from GE Company. HRP-labeled sheep anti mouse IgG , Freund’s complete adjuvant and incomplete adjuvant were manufactured by Sigma Company. Target gene sequence of fusion protein was synthesized by BGI Biological Technology Co. , Ltd.
C. perfringem including type A C57-10 ,
type B C58-5 , type C
C59-4 and type D C60-11 were products of China Institute of Vet-
erinam Drug Control.
1.2 Methods
1.2.1 Synthesis and primer design of a recombinant gene. a re
combinant gene was designed by removing protein signal peptide and optimizing codon sequence. A pair of primers was designed according to the sequence of a recombinant gene , while
BamH I
and Xho I restriction sites wereadded at 5’ and 3’ ends of primers respectively , for amplification of target fragment ( underlined parts are restriction sites).
Forward primer F : 5,-GGATCCATGTTTTGGGACCCGGA-
CACCGAC-3,;Reverse primer R : 5,-CTCGAGTTATTTGATGTTATAGGT-
GCTGT-3’.
Efficient and Soluble Expression of α Protein of Clostridium perfringens and Preparation的相关文档搜索
推荐阅读
- 批发零售印刷物资项目可行性研究报告评审方案设计(2013年发改委立项详细标准+甲级案例范文)
- 【精品模板】精选PPT模板 (3)
- 三座标测量仪编程作业指导书
- RHCE考试题与解答
- “美国高血压预防、检测、评估与治疗联合委员会第8次报告”解析
- 知识经济时代的企业文化建设
- sqlserver2000迁移到oracle
- PISA科学试题(温室效应)
- 差分方程及其Z变换法求解
- VOCALOID v家英文介绍
- 对作弊学生的教育
- 行政事业单位会计内部控制规范讲座
- AOC L2262WA液晶显示器维修手册图纸
- 40篇英语演讲稿
- 图像传感器应用
- 日菜品意见反馈表(报)
- 2015年07月安徽微车互联网营销数据报告
- 一生一定要去的20个最美地方
- dB,DBi,DBd, DBm的含义和区别
- Quartus II安装和破解
- 北师大版小学数学四年级上册期中测试题1
- 高一物理下学期第四次统考试题(无答案)
- 伊利股份600887一季度财报2016年
- 2016-2022年中国代理记账行业规模现状及十三五竞争战略研究报告(目录)
- 用route命令解决Wifi和网卡不能同时上内外网问题
- 中国风水墨风格
- 中国名陶瓷赏析试题与答案 | http://www.360docs.net/doc/info-735b794d905f804d2b160b4e767f5acfa1c783ae.html | 2019-01-16T05:52:12 | CC-MAIN-2019-04 | 1547583656897.10 | [] | www.360docs.net |
New in version 2.3.
The below requirements are needed on the host that executes this module.
# Create a source and destination nat rule - name: create nat SSH221 rule for 10.0.1.101 panos_nat: ip_address: "192.168.1.1" password: "admin" rule_name: "Web SSH" from_zone: ["external"] to_zone: "external" source: ["any"] destination: ["10.0.0.100"] service: "service-tcp-221" snat_type: "dynamic-ip-and-port" snat_interface: "ethernet1/2" dnat_address: "10.0.1.101" dnat_port: "22" commit: False
This module is flagged as deprecated and will be removed in version 2.9. For more information see DEPRECATED. | https://docs.ansible.com/ansible/latest/modules/panos_nat_policy_module.html | 2019-01-16T07:02:51 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.ansible.com |
This page provides a tutorial for using the V-Ray Ptex Map to render Mudbox PTex textures.
Overview
Sculpting the Object in Mudbox
Open your object in Mudbox. In this case, we are just going to use one of the stock head models.
To prepare the object for PTex paining, choose Mesh > PTEX Setup... and adjust the desired resolution of the PTex textures. In this example, we just used the default settings.
Painting the Object
Start painting the object as you would normally. When you start painting a new layer, the Create New Paint Layer dialog will appear. For diffuse/specular/bump textures, an 8-bit PTex format will usually be enough. For our example, we just painted one diffuse layer.
Sculpting the Object
Sculpt the object as you would normally, adding subdivision levels as needed.
Here is an example of the final painted and sculpted object in Mudbox:
Exporting the Base Object and the PTex Maps from Mudbox
Exporting the Object
Go back to the 0-level resolution of the object, select it, and export it to an .obj file from File > Export Selection ...
Extracting the Paint Layers
Right-click on the paint layer you want to extract, select Export selected..., and choose a name for the resulting PTex file.
Extracting the Displacement Map
Go to Maps > Extract Texture Maps > New operation ... and in the following dialog select Displacement Map.
For Target Models select the 0-level resolution of the object; for Source Models select the resolution that you want to bake (typically the highest resolution).
Turn on the Smooth Target Models option.
Make sure the Method is set to Subdivision (the Ray Casting method may produce artifacts);
Select a PTex file for the Base File Name option.
Set the Data Format to 32-bit float. This ensures that the displacement values stored in the texture are absolute and do not need further adjustment.
This is what the final extraction options should look like:
Click on Extract button to create the vector displacement map.
Rendering the Object with V-Ray
Importing the Object
In 3ds Max, go to the File > Import... menu and import the .obj file with the 0-level resolution mesh for your object. Make sure that the Retriangulate polygons option in the OBJ importer is disabled.
Create a new VRayMtl, assign it to the object, and attach a VRayPtex texture to the diffuse slot. Choose the saved diffuse PTex file in the texture.
Note that the texture will not appear properly in the Material Editor as the preview sphere has a different topology from our object.
If you render the object now, it should be able to access the diffuse texture:
Add a Displacement modifier to the object and set its Type to Subdivision. Click onTexmap button and select a VRayPtex texture. Instance the newly created VRayPtex texture into the Material Editor and choose the displacement PTex file. Leave the displacement Amount in the modifier to the default value of 1 generic unit.
For the following rendering, we disabled the diffuse texture so that we can better see the displacement result:
The texture seems to be clipped in the lowest and highest parts. This is because V-Ray assumes by default that the texture values are between black and white, which is not correct in this case. To work around this, go to the VRayDisplacementMod modifier, and adjust the Texmap min and Texmap max parameters at the bottom of the rollout. In our case, through some trial and error, we found that using -3.0 for the minimum value and 4.0 for the maximum removes the clipping:
For smoother results, especially if you observe the displacement map from a close distance, it might be useful to set the Filter parameter of the VRayPTex texture to a smooth filter like the Bicubic one.
For comparison, the following images show close-ups of displacement with the Box and Bicubic filters.
Box filter. The pixels in the PTex file are clearly visible.
Bicubic filter. The result is much smoother.
Final Rendering
Now we can enable the diffuse texture and get our final result: | https://docs.chaosgroup.com/display/VRAY4MAX/Rendering+PTex+Textures+from+Mudbox | 2019-01-16T06:21:37 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.chaosgroup.com |
Activity Block
Contents
You can add the Activity block to Self Service or Assisted Service phases to start or stop activity in a report. You can also nest activities to provide additional details.
Do not use Activity blocks for modules, as Designer reports module activity automatically..
Stop Tab
Click Stop to indicate this block is the end of the activity.
Enter information in the following fields: Call Result, Call Result Reason, and Call Result Notes.
Next, click Add Pair to include data, values, or variables to store in the metric data of the activity.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/DES/latest/Help/Activity | 2019-01-16T05:40:11 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.genesys.com |
Before managing the objects in your vSphere inventory by using Orchestrator and to run workflows on the objects, you must configure the vCenter Server plug-in and define the connection parameters between Orchestrator and the vCenter Server instances you want to orchestrate.
You can configure the vCenter Server plug-in by running the vCenter Server configuration workflows from the Orchestrator client.. | https://docs.vmware.com/en/vRealize-Orchestrator/7.1/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-C2EC619C-EAB0-43BB-98E4-7E54C6AA4CFD.html | 2019-01-16T05:25:58 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.vmware.com |
InputEventJoypadMotion¶
Inherits: InputEvent < Resource < Reference < Object
Category: Core
Brief Description¶
Input event type for gamepad joysticks and other motions. For buttons see
InputEventJoypadButton.
Description¶
Stores information about joystick motions. One
InputEventJoypadMotion represents one axis at a time.
Tutorials¶
Property Descriptions¶
Axis identifier. Use one of the
JOY_AXIS_* constants in @GlobalScope.
Current position of the joystick on the given axis. The value ranges from
-1.0 to
1.0. A value of
0 means the axis is in its resting position. | https://godot.readthedocs.io/en/latest/classes/class_inputeventjoypadmotion.html | 2019-01-16T07:13:54 | CC-MAIN-2019-04 | 1547583656897.10 | [] | godot.readthedocs.io |
overview
The darkroom view is where you develop your images. The center panel contains the image currently being edited.
🔗zoom
Middle-click on the center panel cycle between “fit to screen”, 1:1 and 2:1 zoom.
Alternatively you can zoom between 1:1 and “fit to screen” by scrolling with your mouse. Scroll while holding the Ctrl key to extend the zoom range to between 2:1 and 1:10. | https://docs.darktable.org/usermanual/3.6/en/darkroom/overview/ | 2022-05-16T21:44:56 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.darktable.org |
Apache Airflow
You can easily integrate Exasol into Apache Airflow, a platform to programmatically author, schedule, and monitor work flow. This section provides you with information on how to connect the Exasol database to Apache Airflow.
Prerequisite
- Exasol JDBC driver installed on the Apache Airflow server. You can download the driver from the Exasol downloads section.
Installation and Configuration of Airflow
In this document, we will provide you with steps to set up an instance of Airflow running on CentOS 7.
- Install CentOS 7 on a local or virtual machine and select Server GUI as a base. Perform an update, for example:
- Next, install the EPEL (Extra Packages for Enterprise Linux) repository on CentOS, an additional package repository that provides easy access to installing packages for commonly used software. To install the EPEL repository, run the following command:
- Install Pip, which is a tool for installing and managing Python packages.
- Install Apache Airflow "Common" (such as MySQL, Celery, Crypto, or Password auth).
- Create a new directory “airflow” in the home (~) directory, set it as airflow home, and install the airflow in it:
- Install Airflow. For more information, see Apache Airflow Installation.
- Next, initialize the database. This will create the necessary configuration files in the Airflow directory.
- Start the Airflow web server as daemon, and the default port of 8080.
- Start the Airflow scheduler as daemon.
- Once the Airflow web server and scheduler are running successfully, you can access the Airflow Admin UI. Open a browser and enter the following details:
- Next, create a connection to connect Airflow to external systems. To create a new connection using the UI, navigate to the Admin console in the browser and select Connection > Create. The following screen is displayed:
sudo yum -y update
uname -a
Linux centos7.exasol.local 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May
14 21:24:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
sudo yum -y install gcc gcc-c++ libffi-devel mariadb-devel pythondevel
sudo pip install psutil
sudo pip install --upgrade setuptools
sudo pip install 'apacheairflow[async,celery,crypto,jdbc,mysql,password,rabbitmq]'
mkdir -p /usr/opt/airflow
groupadd -g 500 airflow
useradd -g 500 -u 500 -M -d /usr/opt/airflow/ airflow
chown -R airflow:airflow /usr/opt/airflow/
In the example, we have created the Airflow Home directory in the following location - /usr/opt/airflow.
Additionally, we have created a group called Airflow and changed the owner to this group with all the relevant permissions.
This is an optional step.
The first time you run Airflow, it will create a file called airflow.cfg in your $AIRFLOW_HOME directory (~/airflow by default). This file contains Airflow’s configuration and you can edit it to change any of the settings. You can also access this file via the UI by navigating to Admin > Configuration menu.
Enter the following details:
- Conn Id: The ID of the connection for reference within the Airflow DAGs.
- Conn Type: The type of connection. In this case, it is JDBC Connection.
- Connection URL: Enter the connection URL. The JDBC driver uses this URL structure - jdbc:exa:<host>:<port>[;<prop_1>=<value_1>]...[;<prop_n>=<value_n>].
- Driver Path: The location where the driver is installed on the Airflow Server.
- Driver Class: The main class for the Exasol driver. For example - com.exasol.jdbc.EXADriver.
Test the Connection
You can test the connection by running an Ad Hoc query. The Ad Hoc query enables simple SQL interactions with the database connections registered in Airflow.
In the Admin console, navigate to Data Profiling > Ad Hoc Query. Select the Exasol Connection you created and execute any SQL query to test the connection.
Create DAGs
A DAG (Directed Acyclic Graph) is a collection of all the tasks you want run in an organized way. A DAG is defined in a Python script, which represents the DAGs structure (tasks and their dependencies) as code.
You can create a DAG by defining the script and adding it to a folder, for example "dags", within the $AIRFLOW_HOME directory. In our case, the directory to which we need to add DAGs is user/opt/airflow/dags.
Example:
The following is an example DAG that connects to an Exasol database and run simple SELECT/IMPORT statements:
from airflow import DAG
from airflow.operators.jdbc_operator import JdbcOperator
from datetime import datetime, timedelta
# Following are defaults which can be overridden later on
default_args = {
'owner': 'exasol_test',
'depends_on_past': False,
'start_date': datetime(2019, 5, 31),
'email': ['[email protected]'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=1),
}
dag = DAG('Exasol_DB_Checks', default_args=default_args)
# sql_task1 and sql_task2 are examples of tasks created using operators
sql_task1 = JdbcOperator(
task_id='sql_cmd',
jdbc_conn_id='Exasol_db',
template_searchpath='/usr/opt/airflow/templates',
sql=['select current_timestamp;',
'select current_user from DUAL;',
'insert into TEST.AIRFLOW values(current_timestamp, current_user,
current_session);'],
autocommit=False,
params={"db":'exa_db_61'},
dag=dag
)
sql_task2 = JdbcOperator(
task_id='sql_cmd_file',
jdbc_conn_id='Exasol_db',
template_searchpath='/usr/opt/airflow/templates',
sql=['check_events.sql','check_usage.sql','insert_run.sql'],
autocommit=False,
params={"db":'exa_db_61'},
dag=dag
)
sql_task2.set_upstream(sql_task1)
As you can see in the above example, you can perform direct SQLs or call external files with the .sql extension. | https://docs.exasol.com/db/latest/connect_exasol/workflow_management/apache_airflow.htm | 2022-05-16T22:33:34 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.exasol.com |
You're reading the documentation for a version of ROS 2 that has reached its EOL (end-of-life), and is no longer officially supported. If you want up-to-date information, please have a look at Galactic.
ROS 2 Documentation learning.
Learn more.
About ROS 2.
Project governance is handled by the Technical Steering Committee, which you can learn more about here.
Marketing materials promoting ROS 2 can be downloaded from this page.
About this documentation. | https://docs.ros.org/en/eloquent/ | 2022-05-16T22:06:00 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.ros.org |
Title
Towards De-Garrisonisation in Jamaica: A Place for Civil Society
Document Type
Article
Publication Date
2010
Abstract
For nearly 50 years, powerful politically connected criminal actors called ‘dons’ (or area leaders) have occupied – Mafia style – some of Jamaica's deprived urban communities, and enacted new, outlaw forms of community leadership. In these communities, notoriously labelled ‘garrisons’, dons have ‘manufactured consent’ for their illicit rule, using coercive tactics and by positioning themselves as legitimate civic leaders. In the process, these rogue actors have not only gained acceptance among significant numbers of the subaltern class but also (tacit) political recognition in the wider society. Genuine civil society has been eclipsed in Jamaica's urban garrisons due to the persistence of this rogue leadership. Still, a more hopeful outlook for Jamaica may be possible. Drawing upon previous research outlining the widespread struggle against the Mafia led by members of Italian civil society, and the ensuing decline in its omnipotence in that country, the paper considers the implications of the positive developments in Italy for the noticeable movement towards degarrisonisation in Jamaica, and contemplates what role a resurrected Jamaican civil society might play in this process.
Recommended Citation
Johnson, Hume. 2010. "Towards De-Garrisonisation in Jamaica: A Place for Civil Society." Crime Prevention and Community Safety 12 (1): Februrary.
Published in: Crime Prevention and Community Safety, Volume 12, Issue 1 | https://docs.rwu.edu/fcas_fp/185/ | 2022-05-16T21:30:38 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.rwu.edu |
- 07 Feb 2022
- 8 Minutes to read
-
- DarkLight
ERC-721 token
- Updated on 07 Feb 2022
- 8 Minutes to read
-
- DarkLight
An ERC-721 smart contract is used to create non-fungible tokens and bring them to the blockchain.
The process of creating an ERC721 has a few distinct phases. The smart contract sets define one such a process which is what we describe below. This is by no means the only way to run your ERC721 project, if you plan not to follow the playbook below, you can use it to setup your own flow easily.
Phase 0: Image generation
Generative Art
The image generation code for the generative art set is based on the Hashlips Art Engine, please check out the
README file in the
art_engine folder on the usage instructions.
In short, replace the images in the
art_engine/layers folder, change the settings in the
art_engine/src/config.js file, and run
yarn artengine:build to generate your images. Rinse and repeat until you are happy with the result. Note that the generated images are randomized
to prevent streaks of similar images, this can be configured in the
art_engine/src/config.js file.
If you want to use the engine to generate a preview image run
yarn artengine:preview for a static image and
yarn artengine:preview_gif for a gif.
Using
yarn artengine:rarity you can check the rarity of each generated image.
If you want to pixelate your images, use
yarn artengine:pixelate, the settings are again in the
art_engine/src/config.js file.
Not that the generated metadata does not have a real base uri set, after we have uploaded everything to IPFS, we can set it in the
art_engine/src/config.js file and update all the metadata using
yarn artengine:update_info.
The end result looks like this:
{ "name": "thumbzup #419", "image": "ipfs://bafybeihroeexeljv5yoyum2x4jz6riuqp6xwg6y7cg7jaumcdpyrjxg5zi", "attributes": [ { "trait_type": "background", "value": "yellow" }, { "trait_type": "body", "value": "thumb" }, { "trait_type": "face", "value": "happy" }, { "trait_type": "hair", "value": "long brown hair" }, { "trait_type": "accessories", "value": "sunglasses" } ] }
Trading Cards
The image generation code for Trading Cards is based on the a Hardhat task found in the
tasks folder. This task is written especially for the
cards for this example project, but it should be fairly simple to adapt it to your needs.
In short, replace the images in the
assets/layers folder, change the logic in the
task/generate-assets.ts file. To generate the trading cards execute
yarn artengine:build --common 10 --limited 5 --rare 2 --unique 1 --ipfsnode <key of your ipfsnode>. The ipfs node key can be found in
.secrets/default.hardhat.config.ts.
The end result would look like this:
{ "name": "Aiko (#1/1)", "description": "Aiko can express more with his tail in seconds than his owner can express with his tongue in hours.", "image": "ipfs://bafybeia5truvedhrtdfne3qmoh3tvsvpku6h4airpku6eqvcmrfoja7h4m", "attributes": [ { "trait_type": "Serial Number", "value": 1, "max_value": 1, "display_type": "number" }, { "trait_type": "Breed", "value": "English Cocker Spaniel" }, { "trait_type": "Shedding", "value": 3, "max_value": 5, "display_type": "number" }, { "trait_type": "Affectionate", "value": 5, "max_value": 5, "display_type": "number" }, { "trait_type": "Playfulness", "value": 3, "max_value": 5, "display_type": "number" }, { "trait_type": "Floof", "display_type": "boost_number", "value": 100 }, { "trait_type": "Birthday", "value": 1605465513, "display_type": "date" } ] }
Phase 1: Initial Setup
The first step of the process is to deploy the ERC721 contract, and claim the reserve tokens.
Reserves are an initital amount of tokens that are created at the start of the sale. This is used
typically to generate tokens for the team members and to mint tokens for later use (e.g. for marketing
purposes).
During this setup phase, some of the important parameters of the sale and collection are set. In the
contract look for the
Configuration section and tweak the parameters as needed.
////////////////////////////////////////////////////////////////// // CONFIGURATION // ////////////////////////////////////////////////////////////////// uint256 public constant RESERVES = 5; // amount of tokens for the team, or to sell afterwards uint256 public constant PRICE_IN_WEI_WHITELIST = 0.0069 ether; // price per token in the whitelist sale uint256 public constant PRICE_IN_WEI_PUBLIC = 0.0420 ether; // price per token in the public sale uint256 public constant MAX_PER_TX = 6; // maximum amount of tokens one can mint in one transaction uint256 public constant MAX_SUPPLY = 100; // the total amount of tokens for this NFT
Furthermore, the collection will be launched without exposing any of the metadata or art, leaving the
reveal for after the public sale. In the
assets/placeholder folder, modify the artwork and metadata
which will be exposed until the reveal.
Also make sure to go through the files in the
deploy folder to change any of the values to match your project.
When you are happy with the setup, you can deploy the contract and claim the reserves by running.
yarn smartcontract:deploy:setup
Phase 2: Building the whitelist
To have a successful launch, you will engage in a lot of marketing efforts and community building. Typically
before engaging in the actual sale, various marketing actions are taken to build a whitelist. This list
is to allow people to buy in before the public sale. Allowing a person on the whitelist should be close to a
concrete commitment to the sale.
Thw whitelist process is built to be very gas efficient using Merkle Trees. You start by filling the
assets/whitelist.json file
with the addresses of the whitelisted participants and they amount they can buy in the pre-sale.
When you have enough commitments we will built the Merkle Tree, generate all the proofs and stire the Merkle Root
in the contract.
yarn smartcontract:deploy:whitelist
This will export the proofs needed to include in your dAPP in the
./assets/generated/whitelist.json file. Your dAPP
will provide a page where the participants connects their wallet to. Using the address of the wallet, you can load the
proofs and allowances from this JSON file. The dAPP will then configure a form where the participant can choose,
with a maximum of their allowance, how many tokens they want to buy. Pressing the submit button will trigger a transaction
to the
whitelistMint function with all the parameters filled in and the correct amount of ETH/MATIC/etc as a value.
The user signs this transaction in their wallet and the transaction is sent to the network.
To display the state of the sale, the items minted, the items left, use the GraphQL endpoint from the The Graph node you can
launch in the SettleMint platform.
Phase 3: Opening up the pre-sale
As soon as you execute the following command, the pre-sale is live.
yarn smartcontract:deploy:presale
Phase 4: Opening up the public sale
As soon as you execute the following command, the pre-sale is terminated and the public sale is live.
yarn smartcontract:deploy:publicsale
Phase 5: The big reveal
At some point during the process, you will want to reveal the metadata. Some projects choose to reveal immediately, others choose to
reveal after the whitelist sale, and others will wait until a point during the public sale or even after it has concluded.
Revealing the metadata is done by switching the baseURI to the final IPFS folder with setBaseURI. This can be followed up by running the following to freeze the metadata and prevent further changes.
yarn smartcontract:deploy:reveal | https://docs.settlemint.com/docs/polygon-erc-721-token | 2022-05-16T21:54:03 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.settlemint.com |
The Favorites feature is one of the ways to bookmark your favorite Playbooks and Cards. If there's a Playbook or Card that you want to mark as favorite, simply click the heart!
To see all the Playbooks and Cards you have marked as favorite, click the "Favorites" filter on the side menu of the all Playbooks or all Cards view.
| http://docs.kiite.ai/en/articles/2816735-what-does-favoriting-a-playbook-or-card-do | 2020-07-02T08:53:08 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['https://downloads.intercomcdn.com/i/o/109208719/42b015f3e1e03d86ee57c9e5/Screen+Shot+2019-03-15+at+5.09.25+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/109209571/3b594fc39f8b4a88ee925ad9/Screen+Shot+2019-03-15+at+5.13.51+PM.png',
None], dtype=object) ] | docs.kiite.ai |
TOPICS×
Mask data
Mask:
- You can type “re:” in the search box or bar to have the search phrase interpreted as a regular expression. You can use any of the syntax associated with regular expressions in your search phrase. For more information about regular expressions, see the Regular Expression appendix in the Dataset Configuration Guide .
- You can type the $ symbol as the first character in your search string to find phrases that begin with the string you entered, or as the last character to find phrases that end with the string you entered.
- You can type a space as the first character in your search string to find any words within a phrase that begin with the string you entered, or as the last character to find any words within a phrase that end with the string you entered.
Following are examples of different ways to mask a table using the string “on” in a search:
- Typing “on” displays every phrase that contains the string “on” anywhere in the phrase: “ on line banking,” “c on tact buyers,” “bulli on coins,” “bank on line,” “gold opti on s,” and “silver bulli on .”
- Typing “$on” displays every phrase that begins with the string “on”:“ on line banking” and “ on -line payment.”
- Typing “on$” displays every phrase that ends with the string “on”:“silver bulli on ” and “gold opti on .”
- Typing “on” displays every phrase that contains a word that begins with the string “on”:“ on line banking” and “bank on line.”
- Typing “on” displays every phrase that contains a word that ends with the string “on”:“bulli on coins” and “silver bulli on .”
- Using “on” displays every phrase that contains the string “on” as a word:“ on line banking” and “bank on line.” | https://docs.adobe.com/content/help/en/data-workbench/using/client/analysis-visualizations/tables/c-mask-data.html | 2020-07-02T10:41:20 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.adobe.com |
Collections enable you to group API proxies, targets, or dev apps and set up appropriate alert threshold values for all members of the group to diagnose issues faster.
Note: You can add a maximum of 20 items to a collection.
For example, you might want to group the following components:
- High or low traffic API proxies or backend targets
- Target locations
- Target API product
- Target teams
- Target developer apps
After you have defined a collection, you can:
- Filter by a collection on the Recent dashboard.
- Set up an alert based on a collection.
View collections
To view collections that are currently defined, click Analyze > API Monitoring > Collections in the Edge UI.
The Collections page displays, as shown in the following figure:
As highlighted in the figure, the Collections page enables you to:
- View a summary of the collections that are currently defined
- Create a collection
- Edit a collection
- Delete a collection
- Search the list of collections for a particular string
Create a collection
To create a collection:
- Click Analyze > API Monitoring > Collections in the Edge UI.
- Click + Collection.
- Select the type of collection: Proxy, Target, or Developer App.
- Select the environment from the drop-down.
- Click Next.
- Enter a name and description for the collection.
- Click Add Proxies, Add Targets, or Add Developer Apps to add items to the collection.
- Add or remove items from the collection, as follows:
- To add a single item, click its name on the drop-down list.
- To add multiple items, click Add multiple and then select the desired items when prompted and click Add.
You can add a maximum of 20 items to a collection. A maximum of 15 items are displayed at one time. Use the search field to filter the list of items.
- To remove an item, position your cursor over the item and click x.
- Click Save to save the collection.
Edit a collection
To edit a collection:
- Click Analyze > API Monitoring > Collections in the Edge UI.
- Click the name of the collection in the list.
- To edit the name or description, click
and modify the fields.
- To add items to the collection, click
.
- To remove an item, position your cursor over the item and click x.
- Click Save to save your edits.
Delete a collection
To delete a collection:
- Click Analyze > API Monitoring > Collections in the Edge UI.
- Position your cursor over the collection that you want to delete to display the actions menu.
- Click
.
Set up an alert for a collection
After you create a collection, you can create an alert, just as you can for an individual proxy, target, or developer app. For more on alerts, see Set up alerts and notifications.
For example, you create a collection of APIs named myApiCollection. You then want to create a 5xx status code alert for the API proxy collection.
The following example shows how to set up an alert using the UI that is triggered when the transactions per second (TPS) of 5xx status codes for any API in the collection exceeds 100 for 10 minutes for any region:
| https://docs.apigee.com/api-monitoring/collections?hl=cs | 2020-07-02T09:24:27 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['https://docs.apigee.com/api-monitoring/images/collections.png?hl=cs',
'Collections'], dtype=object)
array(['https://docs.apigee.com/api-monitoring/images/collection-alert.png?hl=cs',
None], dtype=object) ] | docs.apigee.com |
General Information
Quality Assurance and Productivity
Desktop
Frameworks and Libraries
Web
Controls and Extensions
Maintenance Mode
Enterprise and Analytic Tools
End-User Documentation
Navigation Fields
Navigation Fields allow you to quickly navigate between related code fragments. They highlight related code fragments and allow you to easily tab through them in both directions. Once you've tabbed to a link, the target code fragment is automatically selected so you can immediately start typing over it.
For instance, you can navigate among calls to a certain method. Place the cursor onto a method call or declaration, and press SHIFT+ALT+U. This will highlight all calls to the appropriate method and allow you to easily tab through them.
Navigation Fields provide the following features:
Navigation Fields are accompanied by a shortcut hint.
Feedback | https://docs.devexpress.com/CodeRush/6078/visual-elements/navigation-fields | 2020-07-02T10:45:10 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['/CodeRush/images/navigationfields9778.png', 'NavigationFields'],
dtype=object) ] | docs.devexpress.com |
link
The
link data type represents a link to another object. This is the primary data type for Haplo objects, as every thing and concept is represented as an object.
Common properties
All values will have these properties.
property type
"link"
property ref
The string representation of the Ref of the linked object.
property title
The human readable
title of the linked object. This is a string which is suitable for presenting to users as the ‘name’ of this object.
Optional properties
The inclusion of other properties depends on the linked object.
property behaviour
The Behaviour of the linked object, if it has one.
Where the object represents a concept or a structural element, such as project types or organisational units, the behaviour is easier to use to identify the object than the
ref.
property code
If the linked object is in the
dc:attribute:type attribute and is a schema object representing the object’s type, this property contains the API code of the type.
This may be more conveniently available in the
type top level property.
Sources
Sources may provide additional properties if included. This section lists commonly used sources, but is not an exhaustive list. Refer to the sources documentation for the full list.
property username
If the
std:username source is included, and a user is represented by the linked object, the
username property will contain the username of the user, obtained from the user’s
username tag. | https://docs.haplo.org/dev/serialisation/attribute/value/link | 2020-07-02T09:37:40 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.haplo.org |
SSTables 3.0 Summary File Format¶
New in version 3.0.
SSTables summary file contains samples of keys that are used for the first, coarse-grained, stage of search through the index. Every summary entry points to a sequence of index entries commonly referred to as an index page where the key looked for is located. So with the summary, it is possible to only read and search through a small part of the index.
The summary file is meant to be read and stored entirely in memory, therefore it is aimed to be reasonably small. This incurs some trade-off between the size of the summary and the cardinality of an index page (number of index entries covered by it). The less the cardinality is, the better precision the initial lookup through summary gives, but this also tends to increase the size of the summary. The balance between two is regulated by the sampling level which rules how many index entries should one summary entry cover.
SSTables Summary File Layout¶
The summary file format in SSTables 3.0 has minimal changes compared to the previous version of the format described here: SSTable 2.0 format The only noticeable change is that in 3.0 the summary file does not store information about segment boundaries. See CASSANDRA-8630 for more details about this change.
Other than that, the data format remains the same. Note that, unlike data and index files, the summary file does not make use of variable-length integers:
struct summary { struct summary_header header; struct summary_entries_block summary_entries; struct serialized_key first; struct serialized_key last; };
Summary Header¶
First goes the summary header which is laid out as follows:
struct summary_header { be32 min_index_interval; be32 entries_count; be64 summary_entries_size; be32 sampling_level; be32 size_at_full_sampling; };
The
min_index_interval is a lower bound for the average number of partitions in between each index summary entry. A lower value means that more partitions will have an entry in the index summary when at the full sampling level.
The
entries_count is the number of offsets and
entries in the
summary_entries structure (see below).
The
summary_entries_size is the full size of the
summary_entries structure (see below).
The
sampling_level is a value between 1 and
BASE_SAMPLING_LEVEL (which equals to 128) that represents how many of the original index summary entries
((1 / indexInterval) * numKeys) have been retained. Thus, this summary contains
(samplingLevel / BASE_SAMPLING_LEVEL) * ((1 / indexInterval) * numKeys)) entries.
The
size_at_full_sampling is the number of entries the Summary would have if the sampling level would be equal to
min_index_interval.
Summary Entries¶
struct summary_entries_block { uint32 offsets[header.entries_count]; struct summary_entry entries[header.entries_count]; };
The
offsets array contains offsets of corresponding entries in the
entries array below. The offsets are taken from the beginning of the
summary_entries_block so
offsets[0] == sizeof(uint32) * header.entries_count as the first entry begins right after the array of offsets.
Note that
offsets are written in the native order format although typically all the integers in SSTables files are written in big-endian. In Scylla, they are always written in little-endian order to allow interoperability with 1. Summary files written by Cassandra on the more common little-endian machines, and 2. Summary files written by Scylla on the rarer big-endian machines.
Here is how a summary entry looks:
struct summary_entry { byte key[]; // variable-length. be64 position; };
Every
summary_entry holds a key and a
position of the index entry with that key in the index file.
Note that
summary_entry does not store key length but it can be deduced from the offsets. The length of the last
summary_entry in the entries array can be calculated using its offset and
header.summary_entries_size.
Keys¶
The last two entries in the
ummary structure are
serialized_keys that look like:
struct serialized_key { be32 size; byte key[size]; };
They store the first and the last keys in the index/data file.
References. | https://docs.scylladb.com/architecture/sstable/sstable3/sstables_3_summary/ | 2020-07-02T09:47:01 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.scylladb.com |
Help talk:Editing policies
From SNIC Documentation
This page is intended for informal discussion regarding topics in the article, and uses the exact same text format. While we do not try to impose any kind of order or structure upon this page, please try to keep your additions constructive and readable. It is also! | https://docs.snic.se/wiki/Help_talk:Editing_policies | 2020-07-02T08:51:07 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.snic.se |
In collateral health checks, we introduce the concept of the Collateral pool. In this section, we introduce another, the Reserve pool. The Reserve Pool is the place where VELO tokens (or any collateralizable assets) not in circulation resides. The Reserve Pool is used to ensure that the value of stable credit stays the same relative to the pegged fiat currency.
Rebalancing manages the flow of collateral between the Collateral Pool and the Reserve Pool. When the collateral value in the Collateral Pool is not enough to maintain the 1:1 value to outstanding stable credit, a rebalancing operation will transfer collateral in the Reserve Pool to the Collateral Pool. This process creates a balance and ensures that the Collateral Pool has a sufficient amount of collateral when users decide to redeem stable credit. | https://docs.velo.org/basic-concepts/rebalancing-the-reserve-and-collateral-pool | 2020-07-02T10:04:34 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.velo.org |
Can I set a User for Post Submitted by Unregistered Users?
This question was asked on the wordpress.org forum
First the Basics
With BuddyForms you can allow unregistered users to submit a post. You have several options on how to manage the new user. You can create the user during the submission or assign the user to a default author.
Please see the Documentation on How to Create New User During Post Submission
This video is an in deep walkthrough over all the options needed for an unregistered user
Now let us answer the question
"Can I set a User for Post Submitted by Unregistered Users?"
It's not working out of the box. We need to create a custom function and add it to the function.php Let me describe the process step by step.
First, we need to create a new hidden form element to add a user id.
Second, we need to add a code snipped to our function.php in the theme or as a separate plugin to auto-assign the author during submission.
See the video to understand the complete process | https://docs.buddyforms.com/article/638-can-i-set-a-user-for-post-submitted-by-unregistered-users | 2020-07-02T08:28:19 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55b4a8bae4b0b0593824fb02/images/5e5d13132c7d3a7e9ae887fa/file-NkjJVjkMYg.png',
None], dtype=object) ] | docs.buddyforms.com |
How to manage change: a Do-It-Yourself Guide
People typically find change difficult and various studies show that a large portion of CRM implementation projects fail, often due to poor user acceptance of the new platform. In the corporate world, consulting companies charge hefty fees to help you manage change: they sell workshops, offer project management services and support for the leadership team.
RunMags is all about cost effectiveness and our mission is to make magazine publishing manageable and profitable. That’s why we have created this Do-It-Yourself Guide on how to manage change within your organization and thereby saving money that otherwise would be spent on consultants.
Background
To support our customers in implementing RunMags in their organization, we have developed a RunMags specific change management methodology and best practices. As RunMags is really a CRM, marketing, fulfillment and billing system, the method is certainly focused on managing the challenges concerning implementing an IT platform, changing business processes, and motivating staff, but the foundation is generic in nature and can be used for virtually any change project.
Use this guide to avoid the pitfalls often associated with an organizational change, the implementation of a CRM platform or the roll out of your new sales rep bonus scheme.
Pragmatic approach
As organization structure, culture and history varies from publisher to publisher, managing the change process when introducing a new company-wide software tool is bespoke by default. In a hierarchical organization, the change can be dictated in a very different manner than in an organization with a more informal management culture. But that doesn’t mean that it will last. It might just become the new flavor of the month that the staff adapts to, knowing that in six months, there will be a new thing.
The RunMags change management methodology is created to provide a framework that can be used in a very pragmatic fashion, regardless of which organization we’re implementing our software in. Key is that management is aware of the challenges and actively communicates to support the change throughout the various stages.
We must certainly manage the technical aspects of rolling out the software, but the softer side of the change is equally important - and this is what most organizations miss. In order to be successful, you absolutely HAVE to understand the personal drivers of your staff and align them with your change project.
Change Acceleration Process
Change Acceleration Process (CAP) is a Six Sigma best practice for quickly changing the way an organization operates ... and (very importantly) making the change last. Ask any seasoned CRM project manager and he or she will tell you this can be quite challenging and increasingly so in a large organization.
CAP is a pre-defined Six Sigma process with steps for moving from the current state to an improved state by speeding up the transition state. The best practice is that a technical strategy alone is insufficient to guarantee success. Rather, it is lack of attention to cultural factors that tend to derail projects when there is a failure. Failure, for our purposes, is defined as failing to achieve the anticipated benefits of the project (i.e., the benefits that justified the project in the first place).
The RunMags change management methodology is based on the principles of CAP, but where CAP provides support for any type of change project, the RunMags change management methodology specifically support the steps in implementing RunMags in a publishing company.
Breaking down the Process
In essence, CAP is a seven step process for moving from the current state to an improved state.
1. Leading change
First and foremost, authentic, committed leadership throughout the duration of the initiative is essential for success. From a project management perspective, there is a significant risk of failure if the organization perceives a lack of leadership commitment to the initiative.
2. Creating a shared need
Before you present RunMags as a solution to employees and other key stakeholders, you must define the problems you're trying to solve. Spend time on making sure you can identify and highlight some three or so major problems each group of stakeholders are dealing with on a daily basis.
Meet with each group or stakeholders and really highlight the problems and explain that your trying to identify a solution that will make the pain go away. For the sales staff this can be double-entry of sales orders, for production it can be managing copy, for management it can be getting proper insight to run the business.
3. Shaping a vision
After a week or so, schedule another meeting with each of the stakeholder groups. This time, communicate very visually how you believe the organization should work and the tools that should support the key processes.
At this point, if you are a project manager or sponsor, you should know RunMags very well so that you understand the tools available and how they can fit into your organization's key business processes.
For each group, tell stories from the daily operation and describe in what way RunMags will help them go about things. Make sure the stories aligns with removing the key pain points in step 2.
4. Mobilizing commitment
When implementing a platform like RunMags, you can't just rely on an official commitment from people. It's not enough with them saying they will do everything in their power do help the project to success. You have to get them to really want RunMags and any new processes because the solution will remove the pain they are currently experiencing.
In almost any organization, you'll find individuals that that are pro change and those that oppose change. It doesn't matter how bad things have been in the past, anything new will just be perceived as a threat. Sales reps who have built a career on being lone wolves, keeping their clients away from the company will go to great lengths not to have to enter information in RunMags. Under-performing managers who have been able to slide by because disjointed legacy systems haven't supported honest and transparent reporting may feel that RunMags will expose them.
To combat any nay-sayers, you have to form a core leadership team with pro change individuals from different departments. This cross-functional team should not be larger than eight people in a large organization but no less than two people in a small organization. progress, procedures, | https://docs.runmags.com/en/articles/104363-runmags-change-management-methodology-and-best-practices | 2020-07-02T09:24:00 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['https://downloads.intercomcdn.com/i/o/115676938/1f8d8186c04f2c35863f65e5/CAP.png',
None], dtype=object) ] | docs.runmags.com |
Rendering and Dimensions
The MultiSelect enables you to configure its layout and the rendering of its elements.
Setting the Width of the List
To customize the width of the drop-down list and change its dimensions, use the jQuery
width() method.
<select id="multiselect"></select> <script> var multiselect = $("#multiselect").data("kendoMultiSelect"); // Set the width of the drop-down list. multiselect.list.width(400); </script>
Setting the Width of the Popup
You can enable the
popup element to automatically adjust its width according to the length of the item label it displays. When the
autoWidth option is set to
true, the popup shows the content as a single line and does not wrap it up.
<select id="multiselect" style="width: 100px;"></select> <script> $("#multiselect").kendoMultiSelect({ autoWidth: true, dataSource: { data: ["Short item", "An item with really, really long text"] } }); </script>
Accessing list Elements
The MultiSelect renders an
ID attribute that is generated from the ID of the widget and the
-list suffix. You can use the
ID to style the element or to access a specific element inside the
popup element.
If the widget has no ID, the drop-down element will have no ID either.
<select id="multiselect"></select> <script> $(document).ready(function() { $("#multiselect").kendoMultiSelect({ dataSource: ["Item1", "Item2"] }); //the DIV popup element that holds header, footer templates and the suggestion options. var popupElement = $("#multiselect-list"); console.log(popupElement); }); </script>
Focusing
Because of its complex rendering, focusing the widget by using a
label element requires additional implementation. For more information, refer to this Kendo UI Dojo snippet.
Managing Scrollable Content
By design, when the user adds an item that does not fit in the existing free space, the MultiSelect expands vertically. To limit the expansion and scrolling of the content, refer to the demo on handling scrollable MultiSelect content. | https://docs.telerik.com/kendo-ui/controls/editors/multiselect/render-dimensions | 2020-07-02T08:36:22 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.telerik.com |
.
Server architecture¶
Demos
- URLs: - - - - -
- Bedrock locales: dev repo
- Bedrock Git branch:
demo/1,
demo/2, etc.
- Bedrock Git branch: any branch named starting with
demo/
- Bedrock Git branch: master, deployed on git push
Stage
- URL:
- Bedrock locales: prod repo
- Bedrock Git branch: prod, deployed on git push with date-tag
Production
- URL:
- Bedrock locales: prod repo
- Bedrock Git branch: prod, deployed on git push with date-tag
You can check the currently deployed git commit by checking.)) | https://bedrock.readthedocs.io/en/latest/contribute.html | 2020-07-02T08:50:02 | CC-MAIN-2020-29 | 1593655878639.9 | [] | bedrock.readthedocs.io |
While:
Create a new branch folder "hello-packaging"
Set up our directory structure including the "src" and "data" folders.
Add your Copying, .desktop, .appdata.xml, and source code.
Now set up the Meson build system and translations.
Test everything! [email protected] <[email protected]>nameSection: x11Priority: extraMaintainer: Your Name <[email protected]>Build-Depends: debhelper (>= 10.5.1),gettext,libgtk-3-dev (>= 3.10),meson,valac (>= 0.28.0)Standards-Version: 4.1.1Package: com.github.yourusername.yourrepositorynameArchitecture: anyDepends: ${misc:Depends}, ${shlibs:Depends}Description: Hey young worldThis is a Hello World written in Vala using Meson build system.
Open the file called "copyright". We only need to edit what's up top:
Format:: hello-packagingSource:: src/* data/* debian/*Copyright: 2019 Your Name <[email protected]. | https://docs.elementary.io/develop/writing-apps/untitled/packaging | 2020-07-02T08:07:06 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.elementary.io |
OffCAT error - The Installed has insufficient privileges to access this directory
Also I remember a related error message that you may get specific to OffCAT installation. "The Installed has insufficient privileges to access this directory "C:\program files\Microsoft\OffCAT". The installation cannot continue. Log on as administrator or contact your system administrator".
If you attempt to install OffCAT as a non-administrative user, you will receive the following error during OffCAT setup. In such scenario to install the Office Configuration Analyzer Tool (OffCAT), please make sure that you log into your computer as a member of the local Administrators group. | https://docs.microsoft.com/en-us/archive/blogs/deva/offcat-error-the-installed-has-insufficient-privileges-to-access-this-directory | 2020-07-02T10:11:31 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
Developer Chat RSS Feeds!
Many have asked me about RSS feeds for upcoming chats. Here are a few you could register for: chat RSS feeds ( Developer ) ( IT Pro ) ( Windows Server )
I'd bet that there will also be some more specific feeds available in the future if you'd like to, for example, subscribe just to C# chats. But this is pretty cool! Thanks Jana! | https://docs.microsoft.com/en-us/archive/blogs/jledgard/developer-chat-rss-feeds | 2020-07-02T10:41:01 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
Creating a Launch Configuration
When you create a launch configuration, you must specify information about the EC2 instances to launch. Include the Amazon Machine Image (AMI), instance type, key pair, security groups, and block device mapping. Alternatively, you can create a launch configuration using attributes from a running EC2 instance. For more information, see Creating a Launch Configuration Using an EC2 Instance.
When you create an Auto Scaling group, you can specify a launch template, launch configuration, or an EC2 instance. We recommend that you use a launch template to ensure that you can use the latest features of Amazon EC2. For more information, see Launch Templates.
After you create a launch configuration, you can create an Auto Scaling group. For more information, see Creating an Auto Scaling Group Using a Launch Configuration.
An Auto Scaling group is associated with one launch configuration at a time, and you can't modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for an existing Auto Scaling group, you must update it with the new launch configuration. For more information, see Changing the Launch Configuration for an Auto Scaling Group.
To create a launch configuration using the console
Open the Amazon EC2 console at
.
On the navigation bar at the top of the screen, the current region is displayed. Select a region for your Auto Scaling group that meets your needs.
On the navigation pane, under AUTO SCALING, choose Launch Configurations.
On the next page, choose Create launch configuration.
On the Choose AMI page, select an AMI.
On the Choose Instance Type page, select a hardware configuration for your instance..
(Optional) For Purchasing option, you may request Spot Instances and specify the maximum price you are willing to pay per instance hour. For more information, see Launching Spot Instances in Your Auto Scaling Group.
(Optional) For IAM role, select a role to associate with the instances. For more information, see IAM Role for Applications That Run on Amazon EC2 Instances.
(Optional) By default, basic monitoring is enabled for your Auto Scaling instances. To enable detailed monitoring for your Auto Scaling instances, select Enable CloudWatch detailed monitoring.
For Advanced Details, IP Address Type, select an option. To connect to instances in a VPC, you must select an option that assigns a public IP address. If you want to connect to your instances.
For Select an existing key pair or create a new key pair, select one of the listed options. Select the acknowledgment check box, and then choose Create launch configuration.
Warning
If you need to connect to your instance, do not select Proceed without a key pair.
To create a launch configuration using the command line
You can use one of the following commands:
create-launch-configuration (AWS CLI)
New-ASLaunchConfiguration (AWS Tools for Windows PowerShell) | https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-config.html | 2020-07-02T10:33:42 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.aws.amazon.com |
Message queue push to remote HTTPS endpoint
If your application pushes messages to an HTTPS endpoint on your systems, the configuration is specific to your application. We’ll provide details on how to configure the HTTPS endpoint.
Options include:
- Sending messages immediately they’re generated
- Sending batches of messages on a schedule
- HTTP Basic authentication
- SSL client certificates
Note that only
https: endpoints are supported. Haplo does not support unencrypted protocols.
Reliability
When the remote endpoint returns a
200 response, the messages will be marked as sent.
Any other status code or transport error will not mark them as sent, and the messages will be resent at some point in the future.
Admin user interface
The admin user interface will include a button to send all messages to your endpoint, rather than waiting for a schedule. This is useful when you have marked a message for resending. | https://docs.haplo.org/dev/message-queue/push | 2020-07-02T09:43:37 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.haplo.org |
AX Application Explorer in Visual Studio
Visual Studio Tools for Microsoft Dynamics AX. A set of integration components for Visual Studio, that transform Visual Studio in an AX centric, model based, highly integrated, repository based environment.
The first step is to surface the AX metadata in Visual Studio:
The AX Application Explorer, displays the AOT inside of Visual Studio. Think of it as Server Explorer meets AOT. It surfaces the essential AX Application elements for the purpose of easy integration with Visual Studio capabilities.
To be continued… | https://docs.microsoft.com/en-us/archive/blogs/axtools/ax-application-explorer-in-visual-studio | 2020-07-02T09:27:38 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
Welcome to Nerdville... Population. Us. (posted by Arturo)
ahh.. TechEd 2005. Another conference under out belts. Our bellies full of free granola bars and bananas, and our brains, brimming with facts and information that we can’t wait put to use. But today, I want to reflect on another interesting aspect of this past week that some of us may have overlooked. One of the greatest things about conferences like this is the fact that it is one of the rare places where any stranger asking what you do... is legitimately interested in precisely that.
So often as a young professionals in the technology industry, we rarely get to answer that question in social situations with all the gory details involved. But not at TechEd. There everyone (well most everyone :) ) is just as excited about CLR Frameworks and SQL servers as you are. Where else can you chat to a stranger about migration strategies and unabashedly vocalize the fact that when you get back to the office, the first thing you're going to do is upgrade to Yukon and rewrite your co-workers favorite string parsing T-SQL function in C#. We openly have these conversations without fear that anyone sharing the table will feel the urge to stand up to find more interesting company elsewhere. Far from it, you often get the smile and the nod of someone who knows precisely what it is you’re talking about.
Don’t get me wrong, I’m a huge fan of social situations involving sitting at a bar and chatting about how my ears are still ringing from the show we just caught at Neumo's. The beauty, boys and girls, lays in the delicate balance that TechEd and conferences like it allow us to strike in our lives.
Viva La Zen
-arturo | https://docs.microsoft.com/en-us/archive/blogs/dditweb/welcome-to-nerdville-population-us-posted-by-arturo | 2020-07-02T10:30:56 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
Use an
External Dynamic List in a URL Filtering Profile
An
external dynamic listis a text file that is hosted on an external web server. You can use this list to import URLs and enforce policy on these URLs. When you update the list on the web server, the firewall retrieves the changes and applies policy to the modified list without requiring a commit on the firewall.
For more information, see External Dynamic List and Enforce Policy on Entries in an External Dynamic List.
- Create the external dynamic list for URLs and host it on a web server.Create a text file and enter the URLs in the file; each URL must be on a separate line. For example:financialtimes.co.in *.example.com/* abc?*/abc.com *&*.netSee URL Category Exception Lists for formatting guidelines.
- Configure the firewall to access.
- In the Type drop-down, selectURL List. Ensure that the list does not include IP addresses or domain names; the firewall skips non-URL entries.
- Enter theSourcefor the list you just created on the web server. The source must include the full path to access the list. For example,.
- ClickTest Source URLto verify that the firewall can connect to the web server.If the web server is unreachable after the connection is established, the firewall uses the last successfully retrieved list for enforcing policy until the connection is restored with the web server.
- (Optional) Specify theRepeatfrequency at which the firewall retrieves the list. By default, the firewall retrieves the list once every hour.
- ClickOK.
- URL Category Exception.
- Attempt to access a URL that is included in the external dynamic platform.Use the following CLI command on a firewall to review the details for a list.request system external-list show type url <list_name>For example:request system external-list show type url EBL_ISAC_Alert_List
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/pan-os/7-1/pan-os-admin/url-filtering/use-an-external-dynamic-list-in-a-url-filtering-profile.html | 2020-07-02T10:19:28 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.paloaltonetworks.com |
Installing on Linux using OTP releases¶
Pre-requisites¶
- A machine running Linux with GNU (e.g. Debian, Ubuntu) or musl (e.g. Alpine) libc and
x86_64,
aarch64or
armv7lCPU, you have root access to. If you are not sure if it's compatible see Detecting flavour section below
- A (sub)domain pointed to the machine
You will be running commands as root. If you aren't root already, please elevate your priviledges by executing
sudo su/
su.
While in theory OTP releases are possbile to install on any compatible machine, for the sake of simplicity this guide focuses only on Debian/Ubuntu and Alpine.
Detecting flavour¶
Paste the following into the shell:&2;fi;if getconf GNU_LIBC_VERSION>/dev/null;then&2;fi;echo "$arch$libc_postfix"
If your platform is supported the output will contain the flavour string, you will need it later. If not, this just means that we don't build releases for your platform, you can still try installing from source.
Installing the required packages¶
Other than things bundled in the OTP release Pleroma depends on:
- curl (to download the release build)
- unzip (needed to unpack release builds)
- ncurses (ERTS won't run without it)
- PostgreSQL (also utilizes extensions in postgresql-contrib)
- nginx (could be swapped with another reverse proxy but this guide covers only it)
- certbot (for Let's Encrypt certificates, could be swapped with another ACME client, but this guide covers only it)
echo "" >> /etc/apk/repositories apk update apk add curl unzip ncurses postgresql postgresql-contrib nginx certbot
apt install curl unzip libncurses5 postgresql postgresql-contrib nginx certbot
Setup¶
Configuring PostgreSQL¶
(Optional) Installing RUM indexes¶
Warning
It is recommended to use PostgreSQL v11 or newer. We have seen some minor issues with lower PostgreSQL versions.
RUM indexes are an alternative indexing scheme that is not included in PostgreSQL by default. You can read more about them on the Configuration page. They are completely optional and most of the time are not worth it, especially if you are running a single user instance (unless you absolutely need ordered search results).
apk add git build-base postgresql-dev git clone /tmp/rum cd /tmp/rum make USE_PGXS=1 make USE_PGXS=1 install cd rm -r /tmp/rum
# Available only on Buster/19.04 apt install postgresql-11-rum
(Optional) Performance configuration¶
For optimal performance, you may use PGTune, don't forget to restart postgresql after editing the configuration
rc-service postgresql restart
systemctl restart postgresql
If you are using PostgreSQL 12 or higher, add this to your Ecto database configuration
prepare: :named, parameters: [ plan_cache_mode: "force_custom_plan" ]
Installing Pleroma¶
# Create a Pleroma user adduser --system --shell /bin/false --home /opt/pleroma pleroma # Set the flavour environment variable to the string you got in Detecting flavour section. # For example if the flavour is `amd64-musl` the command will be export FLAVOUR="amd64-musl" # Clone the release build into a temporary directory and unpack it su pleroma -s $SHELL -lc " curl '' -o /tmp/pleroma.zip unzip /tmp/pleroma.zip -d /tmp/ " # Move the release to the home directory and delete temporary files su pleroma -s $SHELL -lc " mv /tmp/release/* /opt/pleroma rmdir /tmp/release rm /tmp/pleroma.zip " # Create uploads directory and set proper permissions (skip if planning to use a remote uploader) # Note: It does not have to be `/var/lib/pleroma/uploads`, the config generator will ask about the upload directory later mkdir -p /var/lib/pleroma/uploads chown -R pleroma /var/lib/pleroma # Create custom public files directory (custom emojis, frontend bundle overrides, robots.txt, etc.) # Note: It does not have to be `/var/lib/pleroma/static`, the config generator will ask about the custom public files directory later mkdir -p /var/lib/pleroma/static chown -R pleroma /var/lib/pleroma # Create a config directory mkdir -p /etc/pleroma chown -R pleroma /etc/pleroma # Run the config generator su pleroma -s $SHELL -lc "./bin/pleroma_ctl instance gen --output /etc/pleroma/config.exs --output-psql /tmp/setup_db.psql" # Create the postgres database su postgres -s $SHELL -lc "psql -f /tmp/setup_db.psql" # Create the database schema su pleroma -s $SHELL -lc "./bin/pleroma_ctl migrate" # If you have installed RUM indexes uncommend and run # su pleroma -s $SHELL -lc "./bin/pleroma_ctl migrate --migrations-path priv/repo/optional_migrations/rum_indexing/" # nginx and getting Let's Encrypt SSL certificaties¶
Get a Let's Encrypt certificate¶
certbot certonly --standalone --preferred-challenges http -d yourinstance.tld
Copy Pleroma nginx configuration to the nginx folder¶
The location of nginx configs is dependent on the distro
cp /opt/pleroma/installation/pleroma.nginx /etc/nginx/conf.d/pleroma.conf
cp /opt/pleroma/installation/pleroma.nginx /etc/nginx/sites-available/pleroma.conf ln -s /etc/nginx/sites-available/pleroma.conf /etc/nginx/sites-enabled/pleroma.conf
If your distro does not have either of those you can append
include /etc/nginx/pleroma.conf to the end of the http section in /etc/nginx/nginx.conf and
cp /opt/pleroma/installation/pleroma.nginx /etc/nginx/pleroma.conf
Edit the nginx config¶
# Replace example.tld with your (sub)domain $EDITOR path-to-nginx-config # Verify that the config is valid nginx -t
Start nginx¶
rc-service nginx start
systemctl start nginx
At this point if you open your (sub)domain in a browser you should see a 502 error, that's because Pleroma is not started yet.
Setting up a system service¶
# Copy the service into a proper directory cp /opt/pleroma/installation/init.d/pleroma /etc/init.d/pleroma # Start pleroma and enable it on boot rc-service pleroma start rc-update add pleroma
# Copy the service into a proper directory cp /opt/pleroma/installation/pleroma.service /etc/systemd/system/pleroma.service # Start pleroma and enable it on boot systemctl start pleroma systemctl enable pleroma
If everything worked, you should see Pleroma-FE when visiting your domain. If that didn't happen, try reviewing the installation steps, starting Pleroma in the foreground and seeing if there are any errrors.
Still doesn't work? Feel free to contact us on #pleroma on freenode or via matrix at, you can also file an issue on our Gitlab
Post installation¶
Setting up auto-renew of the Let's Encrypt certificate¶
# Create the directory for webroot challenges mkdir -p /var/lib/letsencrypt # Uncomment the webroot method $EDITOR path-to-nginx-config # Verify that the config is valid nginx -t
# Restart nginx rc-service nginx restart # Start the cron daemon and make it start on boot rc-service crond start rc-update add crond # Ensure the webroot menthod and post hook is working certbot renew --cert-name yourinstance.tld --webroot -w /var/lib/letsencrypt/ --dry-run --post-hook 'rc-service nginx reload' # Add it to the daily cron echo '#!/bin/sh certbot renew --cert-name yourinstance.tld --webroot -w /var/lib/letsencrypt/ --post-hook "rc-service nginx reload" ' > /etc/periodic/daily/renew-pleroma-cert chmod +x /etc/periodic/daily/renew-pleroma-cert # If everything worked the output should contain /etc/cron.daily/renew-pleroma-cert run-parts --test /etc/periodic/daily
# Restart nginx systemctl restart nginx # Ensure the webroot menthod and post hook is working certbot renew --cert-name yourinstance.tld --webroot -w /var/lib/letsencrypt/ --dry-run --post-hook 'systemctl reload nginx' # Add it to the daily cron echo '#!/bin/sh certbot renew --cert-name yourinstance.tld --webroot -w /var/lib/letsencrypt/ --post-hook "systemctl reload nginx" ' > /etc/cron.daily/renew-pleroma-cert chmod +x /etc/cron.daily/renew-pleroma-cert # If everything worked the output should contain /etc/cron.daily/renew-pleroma-cert run-parts --test /etc/cron.daily
Create your first user and set as admin¶
cd /opt/pleroma/bin su pleroma -s $SHELL -lc "./bin/pleroma_ctl user new joeuser [email protected] --admin"
Further reading¶
Questions¶
Questions about the installation or didn’t it work as it should be, ask in #pleroma:matrix.org or IRC Channel #pleroma on Freenode. | https://docs.pleroma.social/backend/installation/otp_en/ | 2020-07-02T09:17:32 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.pleroma.social |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.