content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Per-Product Commission For Shopify and Bigcommerce users, we now offer the ability to seamlessly track commissions from individual products within purchases. This feature will allow you to specify which products create commissions, the amount of commission per product, and exclude certain products from commission plans. Setup To set up per product commissions, simply follow the process for setting up a normal commission plan. Select “Line Item” for Payment For, fill out the Payment Type and Amount as normal and add the SKUs for the products that you would like the commission to apply to. If you’d like to specifically exclude certain SKUs and provide commissions on all others, simply select the “Exclude these SKUs” box. To ensure successful product tracking, make sure to set up your SKUs for each individual product in Shopify or Bigcommerce as shown below. Product Details Per product support allows you to now see the individual items purchased on any purchases detail page. You’re able to see the SKU, product description, quantity, and price associated with individual products in an order.
http://docs.leaddyno.com/per-product-support.html
2017-02-19T14:14:19
CC-MAIN-2017-09
1487501169776.21
[array(['/img/Per-Product-Screen.png', 'Per Product Screen'], dtype=object) array(['/img/Shopify-SKU.png', 'Shopify SKU'], dtype=object) array(['/img/Purchases-Per-Product.png', 'Per Product Page'], dtype=object)]
docs.leaddyno.com
Architect, layouts control the size and position of the components within an application. Configuring a layout on each container lets you manage how that container's children are rendered. The layout for the top-level (parent) container determines the size and position configuration options that can be set on its child components. To set the layout for a container, use either the container's Flyout Config button (the gear-shaped button to the right of the component as displayed in the Canvas) or the component's layout property in Config. Ext JS provides a number of basic container layouts, which you can select and configure using Architect. The modern toolkit supports a subset of the classic layouts appropriate for mobile UI design. Some layouts support specific, commonly used presentation models (such as accordions and cards) while others provide more general-purpose models (such as HBox and VBox that arrange the child components either horizontally or vertically) that can be used for a variety of applications. The following table summarizes the layout options supported for Ext JS: To get you started, the end of this guide includes three examples that illustrate how to use layouts. You can use the Ext JS Layout Browser to get a more complete picture of how the various layouts work. Although the Layout Browser is written for Ext JS, the information is also valid for the subset of layouts that are supported for the modern toolkit. The examples at the end of the guide use the Ext JS framework. The following are the basic Ext JS layouts you can use to start configuring View components. supported by both toolkits. For general-purpose containers such as a Panel, using the Auto layout means child components are rendered sequentially. Some containers are automatically configured to use a layout other than the default Auto. For example, TabPanel defaults to the Card layout and Toolbar defaults to the HBox layout. supported only by classic Arranges components using explicit x/y positions relative to the container. This enables explicit repositioning and resizing of components within the container, providing precise control. Absolute-positioned components remain fixed even if their parent container is resized. Architect displays a grid within a container that uses Absolute layout. By default, components snap to the grid as they are repositioned. Clicking the container's Flyout Config button enables resizing or disabling the grid. The grid is only displayed as a layout guide in the Design view. It is not visible when the component is rendered. supported only by classic Arranges panel components in a vertical stack where only one panel is expanded at a time. Only panels (including sub-classes thereof, for example, TabPanel) can be added to a container that uses the Accordion layout. supported only by classic Arranges components relative to the sides of the container. Specify the width and height of child components as a percentage of the container or specify offsets from the right and bottom edges of the container. If the container is resized, the relative percentages or offsets are maintained. supported only by classic Arranges panel components in a multipanel layout according to one of five regions: A container that uses the Border layout must have a child assigned to the Center region. The Center region is automatically sized to fit the available space. Resize the North, South, East, and West regions on the canvas by clicking and dragging the right or bottom edge of the panel. Make any of the regions in a Border layout collapsible by enabling the collapsible attribute. When rendered, child panels automatically resize when the container is resized. supported by both both toolkits Allows users to step through a series of components, one at a time, by arranging child components so that only one can be visible at a time, filling the entire area of the container. Specify the component to make visible with the setActiveItem method. This behavior is typically attached to a UI navigation element, such as Previous and Next buttons in the footer of the container. It is commonly used to create wizards; see an example later in this guide. supported only by classic Arranges components in a multi-column layout. The width of each column can be specified either as a percentage (column width) or an absolute pixel width (width). The column height varies based on the contents. Enable autoScroll if the application data requires viewing column contents that exceed the container height. supported by both toolkits. Expands a single child component to fill the available space. For example, use this layout to create a dialog box that contains a single TabPanel. If the container is a type of panel component, you can also add and dock additional child components, such as a Toolbar, to the top, left, right, or bottom of the panel. Supported only by classic Arranges components in an HTML table. Specify the number of columns in the table. Architect enables creation of complex layouts by specifying the child component rowspan and colspan attributes. supported by both toolkits. Arranges the child components horizontally. Setting the alignment of the container to stretch causes the child components to fill the available vertical space. Setting the flex attribute of the child components controls the proportion of the horizontal space each component fills. supported by both toolkits. Arranges the child components vertically. Setting the alignment of the container to stretch causes the child components to fill the available horizontal space. Setting the flex attribute of the child components controls the proportion of the vertical space each component fills. When containers are nested, the layout configuration for the parent container manages the layout of whatever child components it contains, including other containers. The layout does not affect the contents of any child containers, only the containers themselves. This allows for nested, complex layouts to be created. With HBox and VBox layouts, child components can be resized to fit the available space in a container using the flex attribute. The flex attribute is a numerical value that represents the proportion of the available space that will be allotted to a component. Set the attribute to any floating point value, including whole numbers and fractions. Consider a component with three sub-panels in which flex is set to 1 for Panel 1 and Panel 3 and flex is set to 2 for Panel 2. The available space is divided into four equal portions (the sum of the flex values), and Panel 1 and Panel 3 each get one portion while Panel 2 gets two, as shown here: Setting an absolute width or height for some components and a flex value for others causes the absolute sizes to be subtracted from the total available space and the remaining space to gets two-thirds of the space and Panel 3 gets one-third of the space, as shown here: If neither an absolute size nor a flex value are specified for a component, the framework checks to see if the size is defined in the application's CSS. If no size is specified in the CSS, the framework assigns the minimum necessary space to the item. If stretch is specified as the alignment option for a container that uses the HBox or VBox layout, its sub-components are automatically stretched to horizontally or vertically fit the size of the container. When HBox is used, sub-components are stretched vertically. With VBox, sub-components are stretched horizontally. For example, when stretch is set on a panel that uses HBox, each of the subpanels is automatically stretched to fill the available vertical space. The stretchmax option works just like stretch, except it stretches sub-components to the size of the tallest or widest component, rather than the size of the container, like this. If a component uses the Card layout, its children are visible one at a time, making it an ideal option for creating a wizard. This example shows how to create a three-step registration wizard using the Card layout. The basic approach is setActiveItemfunction to display the appropriate panel when the user clicks a navigation button within the Window. Start by adding a Window container to Architect, either through the Toolbox or the Canvas. A Window can only be added as a top-level component; it cannot be added as a child of an existing component. Click the Window's Flyout Config button and select card from the layout menu to apply the Card layout to the Window. Also, name the wizard by setting its title property in Config to a title of your choice, or by double-clicking its title in the Canvas and over-writing MyWindow there. Next, drag a Panel container onto the Window; this subpanel is used to create the first step in the wizard. Panels in a Card layout are numbered in the order they are added to the container, starting with item 0. By default, item 0 is set as the active item. To change the active item within Architect, select the Window and set the activeItem attribute to the panel to be made active. Add two more panels to the Window for the second and third steps of the Wizard. Either drag them onto the title bar of the Window on the Canvas or onto the Window in the Inspector Views node. As sub-panels are added, hide their title bars. First select each sub-panel in the Inspector. Then, either scroll through Config to the title attribute under Ext.panel.Panel, or type title in the filter field. Then, either select the text in the field ( My Panel) next to the title attribute and erase it, or click the x just to the right of the title Value field. The wizard needs navigation buttons to move from one step to the next. Add them by adding a Toolbar component from Toolbox to the top-level Window and docking it at the bottom of the Window by choosing bottom in the Flyout Config menu. Then, add four buttons to the Toolbar and name the buttons Cancel, Next, and Submit. Double-click the first button label on the Canvas and type over to name it, and use tab to move to the next button in the Toolbar until you’ve named each button. The buttons need a little more work to make them more usable by both the user and the developer. First, align the buttons by adding a Fill component between the Cancel and Previous button and a Spacer of width 20 between the Next and Submit buttons. Then, using Config, scroll down to the itemid attributes (under Ext.AbstractComponent) for each button and set each to a name that can be easily referenced in the navigation handler. For example, set the itemid attributes to cancelBtn, prevBtn, nextBtn, and submitBtn respectively. Now add content for the wizard to each card. Since the wizard needs to gather user input, each card should be a Form Panel rather than a Panel. Fortunately, in Architect it is easy to change one type of component into another. To change the Panels into Form Panels, right-click each one and choose Transform, then Ext.form.Panel. For this example, we built a registration wizard for a series of horsemanship clinics with the three cards shown below. By default, Card 0 is the active card. To add form fields to Card 1 and Card 2, select the Window and set its activeItem attribute to the panel you want to work on. The HBox layout enables horizontal arrangement of sub-components, while VBox lays out sub-components vertically. These general-purpose layouts provide a lot of control over how components are positioned without having to resort to using absolute positioning. This example arranges several related checkboxes in multiple columns to conserve space. Start with a Form Panel container, then add a FieldSet container to the Form Panel parent for the checkboxes. Set the layout of the FieldSet to hbox. Next, add a Container component to the FieldSet for each column. Select each Container in the Inspector, set flex to 1, and set the height to accommodate all the checkboxes that will be added. For example, 60 pixels accommodates three rows of checkboxes. It's easier to select the column containers in the Components tab. When they are first added to the FieldSet, their height defaults to 2 pixels, making them hard to select in the Canvas -- a good reason to select components in the Inspector. Finally, add checkboxes to each column container and set the boxLabel attribute for each. To specify margins around the checkboxes, change the layout of the column containers from auto to vbox, and then set the margin attribute for each individual checkbox. To create applications that need the entire content area in a browser window (that is, the entire browser viewport), use the Viewport container. A Viewport usually uses the Border layout to arrange a collection of sub-panels according to the regions North, South, East, West, or Center. With the Border layout, a Panel sub-component must be assigned to the Center region, which is automatically sized to fit the available space. Let’s step through an example that takes this approach: a viewer students use to register for classes. Start building the viewer by adding a Viewport to a new Architect project. A Viewport can only be added as a top-level component; it cannot be added as a child of an existing component. Select the Border layout by clicking the Viewport Flyout Config button and selecting border from the layout menu. Next, drag a Panel into the Viewport. Because this is the only component currently in the layout, it is automatically assigned to the Center region. This Panel will display information about people who have signed up for one of our classes, so name the Panel Student Information. Add a TreePanel to the Viewport by selecting the Viewport and double-clicking Tree Panel in the Toolbox to add it as a child of the Viewport. Alternately, you can drag the Tree Panel onto the Viewport in the Inspector. Architect automatically assigns the Tree Panel to the West region. Students will use the tree to navigate through the classes they can take, so name it Class List. Note: It is possible to change the region to which a sub-component is assigned. To do so, set its region attribute in Config. The next step would be to configure the Class List tree and the Student Information Panel to display content about classes. A template could be used to display data for individual students in the Student Information Panel.
http://docs.sencha.com/architect/4.1/guides/user_interface/layouts.html
2017-02-19T14:14:11
CC-MAIN-2017-09
1487501169776.21
[]
docs.sencha.com
Migrating an N-Tier Application Using Images You can import existing application images running in a cloud and use them as custom images for a CloudCenter tenant. This example describes a Siebel CRM application that uses an Oracle database with data running at the customer's location. The data is being migrated from a dedicated on-premises server to a cloud. To migrate an N-tier application with data, using images, follow this process: Icon You need CloudCenter administrator privileges to perform this procedure. - Back up the data running on the Siebel CRM application. - Create an image for each tier in the application: - The database with data. - The Siebel CRM application with all dependencies. - Identify the relevant Key Components, Parameters and Macros , and Configuration Files for this application. - Log into CCM UI and access Admin > Image: - Click the Add New link to add a new image. - In the Add a New Image page, fill in the fields to create a logical entry for your image in CloudCenter: - OS Type: The OS on which the image is based. For Siebel CRM, the OS is Linux. - Number of Network Interfaces: The base image dictates the number of NICs. See IP Allocation Mode for additional context. - Enabled: Check this box if you want to use this image. - Map the logical CloudCenter Images to your actual image: - In the Image Mappings section, select the Cloud Type for the image from the dropdown list. - Enter the Image ID associated with the Cloud Type. - Select all applicable Instance Types supported for this image. You can also override the image cost by not modifying the cost. - After adding the mapping, you can view the mapped image by clicking the image name. You can edit/convert/delete images after creating them. - Repeat the step for each image associated with each tier in this application (for example, the database image, SiebelDB). - Users will see the newly added images during their application deployment process. - Configuration Files (or properties file) to include CloudCenter-defined system macros to automatically plug in the appropriate values for parameters defined in the configuration file. Alternately, you can pass these as arguments to install or configuration scripts. - After modifying the configuration files with system macros, create a .war application package and include the updated configuration file. - Upload the application data (packages, configuration files, backup data, SQL script, and other scripts) to your secure, shared directory in the Artifact Repository: - Create an application directory under /storage/app/<application name>. - Upload the components to the Application Repository to the newly created directory. - Define the Application Profile. For the Siebel DB application, select the N-Tier Execution App Profile: For each tier in the application, associate System Tags and if required, configure the Deployment Environments, the target cloud, the Prevent Termination behavior, the Aging Policy, the Metadata for Custom Properties, the Pricing, the Instance Types, the On-Demand Instance details, the storage (Shared Repositories) details, and the Scaling Policy details – if applicable to your deployment. - If the application requires certain parameters to be defined or overridden (administrator, username, and password), include them in the Topology Modeler's Global Parameters tab. - Define the application architecture and services using the Topology Modeler. For each tier, use the Properties panel to provide additional details. For example: - Siebel Database tier: Parameters, username, password, the path for the database script that has the backup data from a previous environment, and so forth. - Siebel CRM tier: Define the path for the application binary file and the configuration file. - Similarly, provide the dependent details for other applicable tiers as well. - Save the N-Tier app profile as Siebel CRM application. Access your new application from the Apps tab and verify your changes for each tier. The application is now ready for deployment. - Deploy the Application: - Depending on the hardware requirement and application-specific requirements, select the cloud and instance types from the displayed list. - Select the required Instance Types . - (access URL) to open the IP address for this application. You may need to specify the full path in the context of the application in the Topology Builder's Basic Information pane. - Use the full path URL to access the migrated Siebel CRM application. - No labels
http://docs.cliqr.com/display/CCD46/Migrating+Applications+Using+Images
2017-02-19T14:12:53
CC-MAIN-2017-09
1487501169776.21
[]
docs.cliqr.com
score; Returns: Real This variable is global in scope and is used to hold a numeric value which is usually used for the game score. Bear in mind this is not necessarily always the case - just because it's called "score" doesn't mean it has to be used for the score and it can be used to hold any integer value you choose. if place_meeting(x, y, obj_Bullet) { score += 10; instance_destroy(); } The above code checks the current instance to see if there is a collision with the instance of the object indexed in "obj_Bullet", and if there is, it adds 10 to the global score and then destroys itself.
http://docs.yoyogames.com/source/dadiospice/002_reference/001_gml%20language%20overview/variables/score.html
2017-02-19T14:16:06
CC-MAIN-2017-09
1487501169776.21
[]
docs.yoyogames.com
Use the solution described below if your secondary NSX Manager gets stuck in transit mode as described in the problem. The issue occurs when you restore the backup on primary NSX Manager when the secondary NSX Manager is in transit mode. Procedure - Log in to the vCenter linked to the primary NSX Manager using the vSphere Web Client. - Navigate to Installation, and then select Management tab. > - Select the secondary NSX Manager that you want to delete and click Actions, and then click Remove Secondary NSX Manager. A confirmation dialog box appears. - Select the Perform operation even if NSX Manager is inaccessible check box. - Click OK. The secondary NSX Manager gets deleted from the primary database. - Add the secondary NSX Manager again. What to do next For more information about adding secondary NSX Manager, refer to NSX Installation Guide.
https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/com.vmware.nsx.troubleshooting.doc/GUID-BD1104BD-4F90-4CCB-938B-2EB50378A69B.html
2017-09-19T15:34:40
CC-MAIN-2017-39
1505818685850.32
[]
docs.vmware.com
To provision machines in a cross vCenter NSX environment when using NSX universal objects, you must provision to a vCenter in which the NSX compute manager has the primary role.. The primary NSX manager can create universal objects, such as universal logical switches. These objects are synchronized to the secondary NSX managers. You can view these objects from the secondary NSX managers, but you cannot edit them there. You must use the primary NSX manager to manage universal objects. The primary NSX manager can be used to configure any of the secondary NSX managers in the environment. For more information about the NSX cross-vCenter environment, see Overview of Cross-vCenter Networking and Security in the NSX Administration Guide in the NSX product documentation. For a vSphere (vCenter) endpoint that is associated to the NSX endpoint of a primary NSX manager, vRealize Automation supports NSX local objects, such as local logical switches, local edge gateways, and local load balancers, security groups, and security tags. It also supports NAT one-to-one and one-to-many networks with universal transport zone, routed networks with universal transport zone and universal distributed logical routers (DLRs), and a load balancer with any type of network. vRealize Automation does not support NSX existing and on-demand universal security groups or tags. If you connect a vSphere (vCenter) endpoint to a corresponding secondary NSX manager endpoint, you can only provision and use local objects. You can only associate an NSX endpoint to one vSphere endpoint. This association constraint means that you cannot provision a universal on-demand network and attach it to vSphere machines that are provisioned on different vCenters. vRealize Automation can consume an NSX universal logical switch as an external network. If a universal switch exists, it is data-collected and then attached to or consumed by each machine in the deployment. Provisioning an on-demand network to a universal transport zone can create a new universal logical switch. Provisioning an on-demand network to a universal transport zone on the primary NSX manager creates a universal logical switch. Provisioning an on-demand network to a universal transport zone on a secondary NSX manager fails, as NSX cannot create a universal logical switch on a secondary NSX manager. See the VMware Knowledge Base article Deployment of vRealize Automation blueprints with NSX objects fail (2147240) at for more information about NSX universal objects.
https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-A5629CA6-FB27-4CF5-B845-B1C6190B75EF.html
2017-09-19T15:31:13
CC-MAIN-2017-39
1505818685850.32
[]
docs.vmware.com
Power management is a tricky thing to understand - and even harder thing to implement properly. The words and states that is used to describe the hardware is not the same as the states used to describe the software. The software/hardware interaction is normally defined by the Advanced Configuration and Power Interface (ACPI). ACPI defines common interfaces for hardware recognition, motherboard and device configuration and power management, and is supported by the Linux kernel natively. The ACPI specification promotes the concept that systems should manage energy consumption by transitioning unused devices into lower power states including placing the entire system in a low-power state (sleeping state) when possible. A system is broken down into classes: While the Global System is working (on), the processor can be in any number of states, from executing instructions at full rate (G0/C0/P0), to executing instructions at a reduced rate (G0/C0/P4), to waiting for interrupts to occur in a low power mode (G0/C1). Device states are independent of the system, and are states of particular devices. Device states apply to any device on any bus. The Blackfin processor provides several operating modes, each with different performance/power/latency profiles. In addition to overall clock management and gating clocks to each of the peripherals, the processor provides the control functions to dynamically alter the processor core supply voltage to further reduce power dissipation. The power states available to the hardware are: IDLEinstruction. The processor remains in the Idle state until a peripheral or external device, generates an interrupt that requires servicing. The kernel IDLE loop is the IDLEstate, as it saves power, but has zero overhead in responding to an interrupt. Not all hardware states are available in the Linux kernel. A simple hardware workaround on the SCKE Strobe made the issue go away: Add a 6.8k Ohm resistor between SCKE (J2-81) and GND (J2-87). The kernel supports three power management states generically, though each is dependent on platform support code to implement the low-level details for each state. Blackfin Linux currently offers Standby. “standby” This state offers high power savings, while providing a very low-latency transition back to a working system. No operating state is lost (the CPU retains power), so the system easily starts up again where it left off. From a Blackfin hardware perspective - the processor is in Full On, but the Clocks are slowed down to consume almost no power. We try to put devices in a low-power state equivalent to D1, which also offers low power savings, but low resume latency. Not all devices support D1, and those that don't are left on. A transition from Standby to the On state should take only a few milliseconds. Linux Kernel Configuration -> Power management options -> [*] Power Management support [ ] Legacy Power Management API (DEPRECATED) [ ] Power Management Debug Support [*] Suspend to RAM and standby Standby Power Saving Mode (Sleep Deeper) ---> [*] Allow Wakeup from Standby by GPIO (2) GPIO number GPIO Polarity (Active High) ---> --- Possible Suspend Mem / Hibernate Wake-Up Sources [ ] Allow Wake-Up from on-chip PHY or PH6 GP There are two different options controlling the Wakeup Wakeup Events: For dynamic power management, any of the peripherals can be configured to wake up the core from its idled state to process the interrupt and resume form standby, simply by enabling the appropriate bit in the system interrupt wakeup-enable register (refer to Hardware Reference Manual SIC_IWR). If a peripheral interrupt source is enabled in SIC_IWR and the core is idled, the interrupt causes the DPMC to initiate the core wakeup sequence in order to process the interrupt. The linux kernel API provides these three functions to enable or disable wakeup capabilities of interrupts: int set_irq_wake(irq, state); int disable_irq_wake(unsigned int irq) file: include/linux/interrupt.h scm failed with exit code 1: file does not exist in git int enable_irq_wake(unsigned int irq) file: include/linux/interrupt.h scm failed with exit code 1: file does not exist in git Example: Following patch enables irq wake for all gpio-keys push buttons. Index: drivers/input/keyboard/gpio_keys.c =================================================================== --- drivers/input/keyboard/gpio_keys.c (revision 4154) +++ drivers/input/keyboard/gpio_keys.c (working copy) @@ -100,7 +100,7 @@ irq, error); goto fail; } - + enable_irq_wake(irq); input_set_capability(input, type, button->code); } In current kernel versions this feature has been added to the gpio-keys driver This feature can be enabled by: root:/sys/devices/platform/gpio-keys.0/power> echo enabled > wakeup This option adds some extra code that allows specifying any Blackfin GPIO to be configured as Wakeup Strobe. There is an alternative Blackfin specific API for GPIO wakeups: This API allows GPIO wakeups without using the Linux interrupt API. It also allows configuring a Wakeup as EDGE or Both EDGE sensitive while the Linux kernel interrupt is configured level sensitive. #define PM_WAKE_RISING 0x1 #define PM_WAKE_FALLING 0x2 #define PM_WAKE_HIGH 0x4 #define PM_WAKE_LOW 0x8 #define PM_WAKE_BOTH_EDGES (PM_WAKE_RISING | PM_WAKE_FALLING) #define PM_WAKE_IGNORE 0xF0 int gpio_pm_wakeup_request(unsigned gpio, unsigned char type); void gpio_pm_wakeup_free(unsigned gpio); The table below shows the HIBERNATE and DEEP SLEEP wake-up sources for the BF60x. [*] Suspend to RAM and standby │ │ │ │ [ ] Run-time PM core functionality │ │ │ │ [ ] Power Management Debug Support │ │ │ │ *** Possible Suspend Mem / Hibernate Wake-Up Sources *** │ │ │ │ [ ] Allow Wake-Up from PA15 │ │ │ │ [ ] Allow Wake-Up from PB15 │ │ │ │ [ ] Allow Wake-Up from PC15 │ │ │ │ [ ] Allow Wake-Up from PD06(ETH0_PHYINT) │ │ │ │ [*] Allow Wake-Up from PE12(ETH1_PHYINT, PUSH BUTTON) │ │ │ │ (1) Wake-up priority │ │ │ │ [ ] Allow Wake-Up from PG04(CAN0_RX) │ │ │ │ [ ] Allow Wake-Up from PG13 │ │ │ │ [ ] Allow Wake-Up from (USB) │ │ The power management subsystem provides a unified sysfs interface to userspace, regardless of what architecture or platform one is running. The interface exists in /sys/power/ directory (assuming sysfs is mounted at /sys). /sys/power/state controls system power state. Reading from this file returns what states are supported, which is hard-coded to 'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk' (Suspend-to-Disk). Blackfin Linux supports: Writing to this file one of those strings causes the system to transition into that state. Please see the file Documentation/power/states.txt for a description of each of those states. root:~> echo standby > standby wakeup from "standby" at Thu Jan 1 01:45:31 1970 Syncing filesystems ... done. Freezing user space processes ... (elapsed 0.00 seconds) done. Freezing remaining freezable tasks ... (elapsed 0.00 seconds) done. Suspending console(s) Restarting tasks ... done. root:/> root:~> echo mem > mem wakeup from "mem" at Thu Jan 1 01:45:31 1970 Syncing filesystems ... done. Freezing user space processes ... (elapsed 0.00 seconds) done. Freezing remaining freezable tasks ... (elapsed 0.00 seconds) done. Suspending console(s) Restarting tasks ... done. root:/> #include <stdio.h> #include <getopt.h> #include <fcntl.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <errno.h> static void suspend_system(const char *suspend) { char buf[20]; int f = open("/sys/power/state", O_WRONLY); int len; ssize_t n; if (f < 0) { perror("open /sys/power/state"); return; } len = sprintf(buf, "%s\n", suspend) - 1; len = strlen(buf); n = write(f, buf, len); /* this executes after wake from suspend */ if (n < 0) perror("write /sys/power/state"); else if (n != len) fprintf(stderr, "short write to %s\n", "/sys/power/state"); close(f); } int main(int argc, char **argv) { static char *suspend = "standby"; printf("Going into %s ...\n",suspend); suspend_system(suspend); printf("Awakeing from %s ...\n",suspend); return 0; } The power wake up times are different among Linux system core driver, Linux generic peripheral driver and Linux application. In 2012R1 Linux distribution for BF60X: Many operating conditions can affect power dissipation/consumption. System designers should refer to Estimating Power for ADSP-BF531/BF532/BF533 Blackfin Processors (EE-229) on the Analog Devices website () EE229 This document provides detailed information In general: Derived Power Consumption (PDDTOT) Internal Power Consumption (PDDINT) External Power Consumption (PDDEXT, PDDRTC) The Standby/sleep mode reduces dynamic power dissipation by disabling the clock to the processor core (CCLK). Furthermore, Standby/sleep_deeper. Complete Table of Contents/Topics
https://docs.blackfin.uclinux.org/doku.php?id=power_management_support
2017-09-19T15:11:02
CC-MAIN-2017-39
1505818685850.32
[]
docs.blackfin.uclinux.org
Manage HBase snapshots on Amazon S3 in Cloudera Manager You can manage your snaphots stored on Amazon S3 using Cloudera, the Take Snapshot dialog box Destination section shows a choice of Local or Remote S3. Select Remote S3. - Click Take Snapshot.While the Take Snapshot command is running, a local copy of the snapshot with a name beginning cm-tmpfollowed.
https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/hbase-backup-dr/topics/hbase-manage-snapsshots-s3-cm.html
2021-09-17T04:04:16
CC-MAIN-2021-39
1631780054023.35
[]
docs.cloudera.com
Date: Sun, 08 Nov 2015 09:18:39 -0600 From: Don Harper <[email protected]> To: "Arthur Chance" <[email protected]> Cc: "FreeBSD-Questions" <[email protected]> Subject: Re: HP Stream 13 and FreeBSD? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help I have tried on a HP Stream 11, but the CPU is a Skyline, and lots of things do not show up. Like, the internal storage. There is a PR up for it, but I have not checked in recently. Even under Linux, support is dodgy at best for the WiFi. I would love to help getting it going, so feel free to reach out to me for testing. I do not know enough at this point to start trying to fix things, yet. Thanks! don Don Harper, RHCE ---- On Sun, 08 Nov 2015 09:10:43 -0600 Arthur Chance <[email protected]> wrote ---- Has anyone managed to get FBSD running on an HP Stream 13? Google only turns up attempts to install Linux. -- Moore's Law of Mad Science: Every eighteen months, the minimum IQ necessary to destroy the world drops by one point. _______________________________________________ [email protected] mailing list To unsubscribe, send any mail to "[email protected]" Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=12140+0+archive/2015/freebsd-questions/20151115.freebsd-questions
2021-09-17T05:15:43
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
A test job is a set of one or more test scenarios that are associated with a specific environment configuration. When a test job is executed—on-demand or via automation tools such as the Environment Manager Jenkins plugin—the designated environment will be provisioned before the tests are executed. Test jobs are configured and run from the Test Scenarios page. Reviewing Available Test Scenarios and Jobs Test Scenarios The upper left panel lists all of the .tsts that are available in each connected SOAtest Server’s TestAssets folder. This list is automatically populated with the test scenarios from each server’s TestAssets folder (see Integrating Virtualize Server and/or SOAtest Server with CTP for details). You can also extend and modify these test suites directly from CTP; for details, see Building Scenarios and Tests. To view details about a scenario, select its test case tree node. Note that you can search for scenarios (top left of page) as well as jobs (middle left of page). The search covers .tst/job names as well as associated metadata. Jobs The Jobs panel allows you to create, search filter, review, execute, and remove jobs, as well as review/remove job execution results. The Jobs panel is automatically populated with jobs for all of the available SOAtest Test Executor component instances. These jobs are automatically synchronized with the associated SOAtest Test Executor component instances: if .tsts are added, removed, or reconfigured in the SOAtest Test Executor, those changes will be applied here as well. You can filter the available jobs to focus on specific systems and environments of interest. Adding Test Jobs To add a test job: - Do one of the following - From the list of jobs, click the Add New Job icon. - From the scenario details view, click the Create Job button. - Specify a name for the test job. - If you want to limit the job history that is saved, specify the maximum days and/or runs you want to allow. Click Add Test Scenarios and specify which test scenarios you want to run, in what order, and with what variables and data sources. - (Optional) If you want an environment provisioned upon test execution, specify the appropriate system, environment, and set of component instances in the upper right side of the page. Under Select an Instance, select a specific environment instance to use that snapshot of component instance settings or select Custom if you want to manually configure the active component instance settings. Executing Jobs Jobs can be executed directly from the UI, or as part of an automated Jenkins job. Executing Test Jobs from the UI To use the UI to kick off a test job and provision any associated environments: - From the Jobs panel, open the job you want to modify. - Click Execute. The specified test environment will be provisioned, then the tests will be executed. Progress and results will be indicated in the Jobs panel. Tip: Executing Your Own Version of a Saved Job—without Impacting Team Members Sometimes you might want to quickly run an existing job with new environment and/or variable settings—without saving changes to that job and potentially impacting other team members who are also working with that job. In this case, just select the job, configure the desired settings (environment context, variables, etc.), and click Execute without first saving the job. Be sure to select the top-level job node rather than a time-stamped job history node. Executing Test Jobs Automatically The Parasoft Environment Manager Plugin for Jenkins (available on Parasoft Marketplace) can automatically run test jobs as part of a Jenkins job. This plugin is designed to help you rapidly configure various actions needed for automated, continuous testing across your software delivery pipeline. For more details, see the description and documentation available on Marketplace. Reviewing Test Job Results To review a test job’s execution results: - From the Jobs panel, expand the associated test job, then click the test run you want to review. - Click View report to open the execution report. Icons in the Jobs panel indicate test outcomes. Managing Test Jobs Modifying Test Jobs To modify a test job’s .tsts or execution settings: - From the Jobs panel, open the job you want to modify. - Make the desired modifications. - Click Save. Cloning Test Jobs To clone an existing test job: - From the Jobs panel, open the job you want to duplicate. - Click the Clone Job icon. Clearing a Test Job’s History To clear a job’s history: - From the Jobs panel, open the job whose history you want to clear. - Click Clear Job History. Removing Test Jobs or Test Runs To remove a test job or test run: - Click the Delete icon in the Jobs panel. Distributing Test Job Execution Across a Cluster of Servers (Execution Group) If you want to distribute test job execution across an "execution group" (a cluster of SOAtest servers grouped under the same server name), ensure that those servers all have: - The same name (this is specified in the SOAtest Preferences> Environment Manager panel). - All of the .tst files you want executed in this manner. The first of these servers to connect to CTP will be treated as the primary server in the execution group; the others will be considered alternates. The SOAtest server page shows only the primary servers (one server for each unique server name). The page for a specific SOAtest server contains a table of the other servers in this same "execution group," along with their current status (online or offline). When the primary server is refreshed, all servers in that execution group will be refreshed. The list of servers in the execution group is also shown when you select the primary server in the Test Scenarios page. To run the distributed tests, simply ensure that all servers are running, then configure and execute the desired job to run on the primary server. If that primary server is busy executing another job, CTP will execute it on one of the other servers within that cluster.
https://docs.parasoft.com/display/CTP302/Using+Jobs+with+Automated+Provisioning
2021-09-17T04:22:40
CC-MAIN-2021-39
1631780054023.35
[]
docs.parasoft.com
DigitalSuite Studio Messages Messages is the DigitalSuite Studio module for monitoring the processing of emails for triggering processes and composite APIs. Emails can be used to launch processes and composite APIs from anywhere. To achieve this, you define the start event of a process or composite API with a message trigger. Depending on the trigger's configuration, the information contained in the emails is automatically mapped to input variables of the process or composite API. The Messages module shows the triggering emails sent to the email address of your account. Information is provided on the email content and the processing status, for example, whether the process has been launched successfully. If this is the case, you can switch to the Process Console to examine the corresponding process request. The Messages module, like all DigitalSuite Studio modules, has in-built, toggleable on-screen help that ensures that assistance is instantly available at the point of need. Please give details of the problem
https://docs.runmyprocess.com/Components/DigitalSuite_Studio/Monitor/Messages/
2021-09-17T03:48:30
CC-MAIN-2021-39
1631780054023.35
[array(['/images/DigitalSuite_Studio/Monitor/Messages/Messages.png', '.'], dtype=object) ]
docs.runmyprocess.com
Why apply to DoCS? DoCS offers a number of distinct benefits to prospective doctoral students: - World-leading researchers in Computer Science as supervisors and co-supervisors who pursue basic as well as applied research, - doctoral training across all research fields of the Faculty of Computer Science at the University of Vienna, - a structured doctoral programme that supports early-career research in the areas of computer science and business informatics, - an intensive doctoral teaching programme tailored to the scientific needs of students in small topical groups, - funding of travel expenses for trips to conferences, - mentoring and peer coaching activities, - summer schools and retreats, - short term research stays at relevant international locations (secondments) - external visitors and mentors, - performance-based contract extensions for doctoral students in well-founded situations on a competitive basis (in future) and - other smaller-scale measure as well as initiatives by doctoral students. Why do a PhD at the University of Vienna?
https://docs.univie.ac.at/research-and-doctoral-programme/benefits/
2021-09-17T04:09:57
CC-MAIN-2021-39
1631780054023.35
[]
docs.univie.ac.at
IBM DS3000 PrerequisitesPrerequisites This chapter describes the prerequisites installation needed by plugins to run. Centreon PluginCentreon Plugin Install this plugin on each needed poller: yum install centreon-plugin-Hardware-Storage-Ibm-Ds3000-Smcli The plugin need the installation of SMcli command. When you install the package, choose 'Management Station': Please choose the Install Set to be installed by this installer. ->1- Typical (Full Installation) 2- Management Station 3- Host 4- Customize... ENTER THE NUMBER FOR THE INSTALL SET, OR PRESS <ENTER> TO ACCEPT THE DEFAULT : 2 After install, monitoring engine user needs root privileges to execute the command : # chmod 4775 /opt/dell/mdstoragemanager/client/SMcli Please ask to your support for the package. You can have following error if the storage firmware and SMcli client are not accurate : The XXXXX Modular Disk storage management software (version 11.10.0G06.0020) you are attempting to use is not compatible with the firmware on the RAID controller modules in Storage Array ANG1-D90002. If you have recently updated your RAID controller module firmware, you need to make sure that its compatible PowerVault Modular Disk storage management software has also been installed on all clients connected to this Storage Array. If the appropriate version is not available, please provide your Customer Support Representative with the following information. RAID Controller Module firmware version: 06.60.34.00 RAID Controller RAID Controller Module appware version: 06.60.34.00 Device API version required: Device API version required: devmgr.v0960api00.Manager SMcli from IBM or Dell can work with the storage. If you use IBM package, set following macros: - Host macro 'CLIEXTRAOPTIONS' = --smcli-path='/opt/IBM_DS/client' - Service macro 'EXTRAOPTIONS' = --verbose --storage-command='show storageSubsystem healthstatus;' Centreon ConfigurationCentreon Configuration Create a host using the appropriate templateCreate a host using the appropriate template Go to Configuration > Hosts and click Add. Then, fill the form as shown by the following table: Click on the Save button.
https://docs.centreon.com/20.10/en/integrations/plugin-packs/procedures/hardware-storage-ibm-ds3000-smcli.html
2021-09-17T03:40:18
CC-MAIN-2021-39
1631780054023.35
[]
docs.centreon.com
Class Hierarchy¶ When working with the code or adding objects to the code once parsed it is good to understand the hierarchy. Scryber has a top level of Components. These create the basic level of document structure. Base Component Classes¶ The base classes form the foundation of the functionality for each of the main concrete classes. It’s much easier to create your own funtionality using one of these classes. - Component - Implements the IPDFComponent interface along with IPDFBindable and - ContainerComponent - Holds child instances. Implements the ‘InnerContent’ collection as a protected property and the IPDFContainer.Contents implementation. Also passes the lifecycle methods to children. - VisualComponent - Extends a container to be styled with the IPDFStyledComponent interface and a lot of properties for individual styles. - Panel - Base class for the standard container componnents (Div, Span, etc) that implements the IPDFViewPortComponent interface to do the laying out of content. - PageBase - Base class for all page types. - ImageBase - Base class for all the image types. - TextBase - Base class for the textual components and implements the IPDFTextComponent interface. - ShapeComponent - Base class for drawing components, implementing the IPDFGraphicPathComponent interface. All components inheriting from VisualComponent have a virtual method for GetBaseStyle() which returns the default style for that component before anything is applied. The PDFObjectType used in constructors is simply a 4 character struct that can identify the type of component, and can also be used for generating ID’s. It can be directly cast from a string value or const string. Note Any custom classes should include a parameterless constructor if they sould be parsed as part of some other xml/xhtml content. Collection classes¶ Scryber maintains a bi-directional structure to the document content graph. When a component is added as a child to a container, that child’s Parent value is updated to the container. As such, each child knows it’s parent, page and document. In order to maintian this the ContainerComponent creates the ComponentList class with itself as an owner. Scryber then provides the abstract ComponnetWrappingList and ComponentWrappingList<T> classes for stronger typing on the contents of the collection and implementing the ICollection<T> interface. e.g. The TableGrid has a Rows property of TableRowCollection which inherits from ComponentWrappingList<TableRow> and wraps the ContainerComponent.InnerContent. public class TableGrid : ContainerComponent { //Strongly typed collection of Rows that will have their parent set automatically. public TableRowList Rows { get { if (this._rows == null) this._rows = new TableRowList(this.InnerContent); return this._rows; } } } //The strongly typed collection for TableRows. public class TableRowList : ComponentWrappingList<TableRow> { public TableRowList(ComponentList content) : base(content) { } } Concrete Component Classes¶ All the components are in the Scryber.Components namespace and inherit from VisualComponent - PageBase - Page - a single page by default style, with a Header and Footer. - Section - allows multiple layout pages by default, with a continuation header and footer. - PageGroup - a set of PageBases instances but also a Header, Footer and continuation header and footer that are passed down. - Panel - Div - basic concrete panel implementation with contents and a full width. - BlockQuote - basic block quote implementation with a custom style of 10pt margins. - Canvas - a container where all content will be relatively positioned and the content clipped to the canvas bounds. - List - contains inner list items. - ListOrdered - a list that has a default decimal numbering. - ListUnordered - a list that has a default bullet addornment. - ListDefinition - a list that has terms and content. - ListItem - the individual items in a list. - UserComponent - allows the dynamic loading of content from a remote source. - Paragraph - a block of inner content, with a 4pt margin at the top as a default style. - Preformatted - a block of inner content, with a default style for rendering code. - TextBase - Date - renders the current of defined date in a specific format. - Number - renders a numeric value in any specific format. - PageNumberLabel - renders the current page (along with totals) in any specific format. - PageOfLabel - renders the page number of another component. - TextLiteral - A non-visual component for text strings, including assigment within the constructor. - TableGrid - A layout of content in a tabular way. - TableRow - A single row of cells within a grid. - TableCell - the final content of the cells in a table grid. - ShapeComponent - HorizontalRule - basic flat line. - Line - Line that supports a position and size. - Path - Complex path definition with M(oves), L(ines to) etc. - PolygonBase - Polygon - Multi-sided shape with style. - Rectangle - A 4 sided shape with style. - Triangle - Just the 3 sides. - Ellipse - A box bounded circle or ellipse with style. - PageBreak - Forces the flow onto the next page if possible. - ColumnBreak - Forces the flow onto the next column or page if possible. - LineBreak - Forces the flow onto a new line. Html Classes¶ When parsing content from HTML the document component graph will be constructed from subclasses of the main components in the Scryber.Html.Components namespace. namespace Scryber.Html.Components { [PDFParsableComponent("div")] public class HTMLDiv : Scryber.Components.Div { } } Layout content¶ In the creation of a PDF document, the components above are used to create the actual layout items. These are much more basic, but know how to generate the pdf content streams and data used by PDF readers. If a document has a Page, and then a Section with 2 page breaks - the layout will be 4 pages long with all the text and runs in the respective pages. If needed any component can implement or override the IPDFViewPortComponent interface and return a new LayoutEngine for that component. The LayoutEngineBase and LayoutEnginePanel are good starting points to layout your own custom content. - PDFLayoutDocument - Top level holding font references, image resource references and the list of layout pages. - PDFLayoutPage - A single page of a content block, with an optional header content block and or footer content block, and any absolutely positioned regions. - PDFLayoutBlock - A grouping of one or more column regions along with any relatively positioned regions, that will render the style. - PDFLayoutRegion - A single continuous set of lines and/or other blocks. - PDFLayoutLine - A single line of content runs. - PDFLayoutRun - A single lightweight atomic graphical content operation. - PDFTextRun - Textual operation - PDFTextRunBegin - Start of the text, includes setting the font etc. - PDFTextRunCharacter - Text Drawing operation - PDFTextRunNewLine - Simple line break operation - PDFTextRunProxy - Placeholder for text to come from the owning component. - PDFTextRunEnd - Completion of text. - PDFTextRunSpacer - Offset of a line run to allow for other content. - PDFLayoutXObject - Renders PDF content as a separate stream, return the reference to that stream. - PDFLayoutComponentRun - allows the owning component to render it’s own content explicitly (e.g. Paths). Content Styles¶ The style classes are based around a dictionary of inherited and direct style item keys with storongly typed style value keys. All of the standay ones are defined in the Scryber.Styles.StyleKeys static class. If a style value is inherited, the it will be copied to any descendent element (e.g. FontFamily) and any direct value will only be used on the component it is defined on (e.g. BackgroundColor) Implementor can create their own style items and keys as needed using the static constructor methods with distinct object types (use mixed case to ensure they are unique). const bool INHERITED = true; var tocStyle = StyleKey.CreateStyleItemKey((PDFObjectType)"Ctoc", INHERITED); var tocLeader = StyleKey.CreateStyleValue<LineStyle>((PDFObjectType)"Ctld", tocStyle); This can then be used on any style definition or styled component to get or set a value, it can be bound to a value, and as it is inherited, will flow down with the content (merged). var styleDefn = new StyleDefn(); styleDefn.SetValue(tocLeader, LineStyle.Dotted); LineStyle default = LineStyle.None; var defined = styleDefn.GetValue(tocLeader, default); if(styleDefn.TryGetValue(tocLeader, out defined) { //Do something with the defined style. } The style class hierarchy is as follows. - StyleBase - root abstract class that holds the actual values. - Style : StyleBase - the main class used on components themselves directly. - StyleDefn : Style - has a class matcher property that will ensure that this style is only applied to Components that match. - StyleFull : Style - a readonly, locked set of style values with known values - position, font, padding etc. - StyleGroup : StyleBase - a collection of style base items, that can be treated as one item in an outer collection. The document has a Styles property which is a StyleCollection, so any of the above can be added to the the document. Each VisualComponent has a Style property where these values can be directly applied. The flow for creating a full style for a component is linear. 1. The GetBaseStyle returns a new instance the standard style for a component. 1. If the component inherits from a super class VisualComponent then it should call the base.GetBaseStyle() and apply any styles to that before returning. 1. The GetAppliedStyle is then called with the base style. 1. This traverses up the component hierarchy, finally reaching the document. 1. The document calls MergeInto on its style collection with the base style. 1. Each style within the collection is MergedInto the style. 1. If that style is a StyleDefn it is checked to make sure it is matched, before being merged. 1. If that style is a StyleGroup, the it calls MergeInto on its own collection of styles. 1. If it should be merged, then each style value is assessed to see if it exists and compares the priority. 1. If the style that should be merged is a higher priority then the value is replaced. 1. We then come back to the original component and any direct styles are applied to to orginal base. 1. Once this is done it is pushed onto the StyleStack, where the hierarchy of styles from parent components are. 1. And finally a full style is built based on inherited and direct values. 1. That full style is retained and used through the rest of the layout and rendering. Despite the number of steps, the build of styles is usually not an issue, compared to extracting font files, image binary data or encrypting streams. However for some documents with a large number or containers e.g. a very long table with many rows it can become the limiting factor as well as memory intensive. The template element automatically caches the style for each of the inner contents, rather than building every time. This can speed the generation, but if it causeing issues can be switched off using the data-cache-styles=false attribute. This will force the styles to be built each and every time. <templalate data- <tr> <td class='desc-cell' >{@:.Description}</td> <!-- can be applied individually so that they are cached --> <td class='val-cell' data-{@:.Value}</td> </tr> </template> Why and when to implement¶ A lot of the time, it is easier to use compound components to build all the main characteristics of the content needed. However sometimes there is a need to use explict functionality or capabilities that are not currently available. At scryber we also use this framework extensively to provide new top level features with safe knowledge the lower engine layers can deal with the grunt work. See Scryber Trace Log and Extending Scryber - td along with Namespaces and their Assemblies for more on this.
https://scrybercore.readthedocs.io/en/latest/document_code_classes.html
2021-09-17T04:33:15
CC-MAIN-2021-39
1631780054023.35
[]
scrybercore.readthedocs.io
You can place multiple Poll Voting Web Parts on a page to display a series of results from separate polls or the same poll in one place. Just create a Web Part page that has multiple zones and add one or more Poll Voting Web Part(s). In the image below, two Poll Voting web parts are shown on the page so that users can vote and see the results all on one page. Poll and Poll Results The Poll Admin Web Part allows you to create, edit, and delete poll questions, review poll responses, and export poll responses to Microsoft Excel. Poll questions can be configured to allow anonymous users to vote, to set expiration dates for poll questions, and to allow users to select more than one answer (check boxes) or only one answer (radio buttons) to a poll question. You can also set the order in which the possible answers to a poll question are displayed. The Poll Admin Web Part can be added to a separate site within a site collection than where you decide to place the Poll Voting Web Part. This allows you to restrict access to the Poll Admin Web Part. If you choose not to place the Poll Admin Web Part on a restricted site, then you can also set access security on the Polls, Poll Answers, and Poll Votes Lists. Follow these links to work with the two web parts:
https://docs.bamboosolutions.com/document/overview_of_poll_web_part/
2021-09-17T04:47:46
CC-MAIN-2021-39
1631780054023.35
[array(['/wp-content/uploads/2017/06/hw38-using7_2013.jpg', 'hw38-using7_2013.jpg'], dtype=object) array(['/wp-content/uploads/2017/06/hw38_PieResults_new_2013.jpg', 'hw38_PieResults_new_2013.jpg'], dtype=object) ]
docs.bamboosolutions.com
Date: Thu, 21 Jan 2010 11:37:03 +0100 From: Ivan Voras <[email protected]> To: [email protected] Subject: Re: VirtualBox - does it work for FreeBSD? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On 01/21/10 08:11, Glyn Millington wrote: > > Good Morning :-) > > A quick question for anyone running VirtualBox on a FreeBSD _host_ > machine, with Linux or Windows as a _guest_ OS. > > > Does it work? Yes. > at is, do the guest additions work fully for those guests, or are > there limitations such those I experience currently when running > FreeBSD 8 as the guest? (eg no access to USB, no fullscreen mode). You are right about USB and probably about fullscreen mode. "seamless" mode works at least for Windows. > Any major problems with a FreeBSDS host and Windows/Linux guest? You might have problems with using virtual SMP. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=787993+0+archive/2010/freebsd-questions/20100124.freebsd-questions
2021-09-17T05:20:13
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
Builder The Audience Builder is where you define your audience. When you open an existing audience, the Builder will show its building rules and will let you edit over them. Insights The Insights section helps you understand your audience better. You can see the total number of users in the audience, how many can receive emails or push notifications and how many have been active in the last 7 days. Use the Group by to expand the insights for a particular audience. Engagements For each audience, Leanplum will show you a list of active Campaigns, Messages, A/B tests. This helps make more informed decisions when editing an audience. Actions Click on Save Audience to save your audience in its current state or go to Explore Users to see a list of the user profiles in the audience. Through More Actions you can also: - Create a campaign that engages the audience - Download the users in the audience in a CSV - Delete the audience Updated 5 months ago
https://docs.leanplum.com/docs/audience-details
2021-09-17T04:54:09
CC-MAIN-2021-39
1631780054023.35
[array(['https://files.readme.io/30b8e77-Audience_Details.png', 'Audience Details.png'], dtype=object) array(['https://files.readme.io/30b8e77-Audience_Details.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0e93436-Builder.png', 'Builder.png'], dtype=object) array(['https://files.readme.io/0e93436-Builder.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/aebe300-Insights.png', 'Insights.png'], dtype=object) array(['https://files.readme.io/aebe300-Insights.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0e10a27-Engagements.png', 'Engagements.png'], dtype=object) array(['https://files.readme.io/0e10a27-Engagements.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0efbc2d-11.png', '11.png'], dtype=object) array(['https://files.readme.io/0efbc2d-11.png', 'Click to close...'], dtype=object) ]
docs.leanplum.com
全新内容 在虚幻引擎4.25版本中,我们仍然在努力为大家提供最完善、最灵活、最强大,且操作便利的实时3D创建工具。新版本进一步拓展了工具集,加入了 更多游戏和可视化开发领域的内容创建与编辑 功能,同时改良了功能集、改善了现有工具的工作流程。线性媒体工具得到了改良,能够更高效地创建身临其境的线性与动画内容。对现有平台的支持也得到了进一步提升,并 为次世代游戏主机提供一流支持。 设计者和创作者从终端产品中移除的内容越多,越难以实现最初的创作愿景。虚幻引擎4.25版本提供了全新及改良的系统和工具,使创作者能够以前所未有的自由度在编辑器的情境中直接对最终内容进行创建、修改和迭代。Niagara视效和Chaos物理系统已得到改良和拓展,能够提供 高质量、高性能动态模拟。Control Rig和Sequencer已结合,能实现 编辑器内的绑定角色动画。同时还添加了建模和造型工具,能够 在视口中构建和更新环境。 虚幻引擎继续提供新鲜好用的方式,让用户可以制作出混合数码元素的影片和线性内容。新增的影片渲染管理器(Movie Render Manager)能够 以简化工作流输出高质量渲染,并整合到任意流程中。nDisplay多显示技术也变得 更容易配置且更强大,能让任意虚拟产品通过LED壁投和大型场馆显示屏进行展示。除此之外,新版本还对其他几个领域进行了改良,如摄像机内视效、直播和实况工具、3D文本动画打造的动态图形,用户甚至还能 在虚幻编辑器中直接生成表格和图表。 在每个版本中,我们都倾注了不少精力来提升对各个平台的支持,以便开发者的项目面向更广平台的用户。此版本新增了 对次世代游戏主机的支持,也就是微软Xbox Series X及索尼PlayStation 5。此外,iOS与Android的 渲染和开发工作流程已得到大幅改进。HoloLens 2和Magic Leap之类的增强现实设备也加入了新功能,开发工作流程同样得到了改良。 此版本包含了GitHub虚幻引擎开发者社区提交的114项改良!特此对虚幻引擎4.25版本的每位贡献者表达诚挚谢意: Filippo Tarpini "Filoppi", Doug Richardson "drichardson", Daniel Marshall "SuperWig", Doğa Can Yanıkoğlu "dyanikoglu", "projectgheist", Artem V. Navrotskiy"bozaro", "KieranNewland", "sturcotte06", "ohmaya", Morva Kristóf "KristofMorva", Kalle Hämäläinen "kallehamalainen", Eric Spevacek "Mouthlessbobcat", "mattpetersmrp", "doublebuffered", "SaltyPandaStudios", "Fi",Charles Alexander "muchcharles", Kristján Valur Jónsson "kristjanvalur", "solasin", "zzgyy123", Alex Stevens "MilkyEngineer", Artem Gurevich "glcoder", Konstantin Heinrich "kheinrich188", Jørgen P. Tjernø "jorgenpt", "CA-ADuran", Muhammad A.Moniem "mamoniem", Paul Greveson "moppius", Cristiano Carvalheiro "ccarvalheiro", "farybagj", Robert Khalikov "nbjk667", "ecolp-improbable", "unrealTOM", "fuchuanjian2016", Ilya "Neo7k", Mikhail Zakharov "real-mikhail", Cameron Angus "kamrann", Sam Bonifacio "Acren", Michael Kotlikov "mkotlikov", "Yaakuro", "dbaz", "kshokhin", Joe Best-Rotheray "cajoebestrotheray", Selivanov Artem "broly", Toru Hisai "torus", Lucas Maldonacho "Maldonacho", Maxim Marinchenko "mmarinchenko", Matan Shukry "matanshukry", Ben Hetherington "BenHetherington", "No-Face-the-3rd", Jeong ChangWook "hoiogi", Geordie Hall "geordiemhall","jessicafolk", Franz Steinmetz "Kartonfalter101", "fieldsJacksonG", Rune Berg "1runeberg", "DCvipeout", Nick Edwards "NEdwards-SumoDigital", "harshavardhan-kode-sb", Lukas Tönne "lukastoenneMS", Noah Zuo "NoahZuo", Chris "minimpounminimpoun", Viatcheslav Telegin "vstelegin", Ryan "RyanyiNF", smallshake "smallshake", Sergey Igrushkin "shkin00", Punlord "Punlord', Andrey Chistyakov ,"roidanton", Vasilii Filippov "DrizztDoUrden", Yi Qingliang "Walkthetalk", HugoLamarche "HugoLamarche", Gerke Max Preussner "gmpreussner", "RPG3D", Zak Strange "StrangeZak", "heavybullets", Alexander "DecoyRS", Martin Nilsson "ibbles", Peter Giuntoli "petergiuntoli", Torsten Raussus "Getty", Evan Hart "ehartNV", Arvid Hansen Díaz "DonHansonDee", Gregory Hallam "greghs". 主要功能 新增内容:次世代平台支持 虚幻引擎4.25将首次为索尼PlayStation 5和微软Xbox Series X提供一流的平台支持。今年,我们还将继续更新4.25的后续分支,提供优化、修复和验证以帮助开发者在次世代主机上发布产品。新功能包括针对各平台的功能,如全新音频改良,首次支持在线子系统,以及对TRC和XR验证的早期支持。 新增内容:Unreal Insights改良 新版本对 Unreal Insights 进行了用户体验及架构改良,加入独立的追踪浏览和记录、全新查看器进程及追踪组织通道。此版本还包括Timing Insights的搜索和可视化,新增"Network Insights"窗口,帮助用户优化和调试网络流量。 Unreal Insights用户体验改良(测试版) 此版本中,Unreal Insights包含以下用户体验和架构改良: 在新版本中查看追踪将启动单独的Insights进程,将追踪的浏览和记录清晰地分离开 选择新追踪时,Unreal Insights支持自动生成新的查看器进程 追踪事件纳入通道,可在Insights中开关实时会话的这些事件 用户可以从命令行选择通道 Timing Insights 在新版本中,用户可以使用定时事件过滤来改善搜索体验。用户可以借助图表支持改良来追踪和显示数据相关性的指标。 欲知详情,请参阅Unreal Insights。 Networking Insights(实验性) 首先了解如何获取和使用Unreal Insights。 Unreal Insights包含用于优化、分析和调试网络流量的 Networking Insights。用户可以利用以下功能记录追踪信息,以显示网络行为: 游戏实例(Game Instance)功能按钮,用于在记录网络会话期间显示可见机器 连接模式(Connection Mode)功能按钮,用于显示传出或传入的数据 数据包概览(Packet Overview)面板,用于显示游戏期间传输或接收的数据包时间轴(和大小) 数据包内容(Packet Content)面板,用于显示数据包内容,例如重复的对象、属性和远程函数调用 网络统计数据(Net Stats)面板,用于显示选定数据包的追踪事件,包括数据包总大小、大小上限、以及排除式或包含式数据包平均大小 欲知详情,请参阅Networking Insights。 Animation Insights(实验性) 新版本中,编辑器拥有 Animation Insights 插件,可显示gameplay状态和实时动画行为。用户可以使用以下功能记录追踪信息,以显示动画行为: 通道过滤,选择写至记录数据集的追踪数据 源过滤,选择输出追踪数据的gameplay对象 姿势、曲线、混合权重、动画图表、蒙太奇和动画通知轨迹 包含实时更新的动画图表原理视图,用于替换 'showdebug animation' 系统 欲知详情,请参阅Animation Insights。 欲详细了解追踪会话和追踪录制器,请参阅Unreal Insights概述。 新增内容:Niagara已可用于生产 Niagara视效(VFX)系统已可用于生产。Niagara已通过严苛测试,用途广泛:它既能制作《堡垒之夜》中的视效,也能制作高端技术演示项目或进行电视电影虚拟制片。新版本对用户界面进行了改良,更便于使用。性能和伸缩性也得到了极大提升。另外还新增了音频数据接口、粒子间粒子通信等诸多功能。 Niagara的粒子属性读取器 新版本中,用户可以使用新的粒子属性读取器数据接口 直接访问其他粒子的属性。发射器或系统可直接读取其他粒子的参数(如位置、颜色、年龄等),从而实现聚集、距离约束、尾迹等各种效果和行为。 粒子属性读取器数据接口可以从使用它的同一发射器、或从同一系统中的其他发射器中读取粒子属性。当它从自己的发射器读取时,将返回上个tick或模拟阶段的数据。当粒子属性读取器从不同的发射器读取时,Niagara会先模拟它正在读取的发射器,然后再模拟粒子属性读取器的发射器。如此,数据接口便能从同一tick返回结果。 Niagara的参数面板UI改良 Niagara脚本编辑器的参数面板布局已得到更新,极大提高了Niagara脚本中所使用参数的清晰度。新布局与蓝图参数面板布局类似。参数不再按命名空间(系统、发射器、粒子等)分类,而是按描述其在脚本中用途的功能标签(输入、引用、输出和本地)来分类。新版本中,用户也可以用下拉菜单来选择参数的目标命名空间! 输入(Input) 类别中的参数可由脚本的用户配置(之前在 模块(Module) 命名空间中写入的参数也是如此)。引用(Reference) 类别中的参数不可配置。输出(Output) 类别中的参数包括脚本修改的任何内容。本地(Local) 类别中的参数在脚本中定义,且仅在该脚本中有效。 音频示波器和音频频谱数据接口 用户可以利用新版本中的 音频示波器(Audio Oscilloscope) 和 音频频谱(Audio Spectrum) 数据接口将Niagara系统连接到虚幻音频引擎。音频示波器 可用于直接访问音频信号的波形,而 音频频谱 可用于创建可视化效果,该效果将随音频在特定频率下的响度而变化。 这些接口能够尽量减少延迟和性能开销。这两个接口默认使用游戏的最终音频,但也可以设为仅使用特定的音频副路混合。 Niagara效果类型资产 效果类型资产可用于设置伸缩性和性能相关设置的默认值,这些设置在特定类型的所有效果之间共享,以便在项目中不同的效果类型间保持一致性并执行预算限制。 平台的性能可扩展性 新版本中可逐平台指定Niagara效果的性能伸缩性,对平台伸缩性进行简单明了的控制,同时仍可根据需要微调性能。 可以使用质量级别来控制启用发射器的平台,例如只对拥有 低 和 中 效果质量的设备启用发射器。为便于更好地控制,可以使用各个质量级别下的设备概述文件树,在平台集中包含或排除特定的设备概述文件。 低效果平台下拉选项 基于平台的伸缩性也能够在效果类型资产的伸缩性设置中进行指定。详情请见"Niagara效果类型资产"章节。 发射器继承重设父项 新版本中,用户可以在创建后使用Niagara中的继承来更改发射器的父项,以增强灵活性、并在保留发射器观感的同时重组项目中的资产。父项变化时,子发射器中与新父项匹配的所有模块都将更新;但与新的父发射器不匹配的所有内容都将保留在子发射器上。 选中发射器后,设置菜单将包含名为 更新父发射器(Update Parent Emitter) 的选项。选择该选项显示菜单,可在其中选择新的父发射器。选择新的父发射器后,子发射器将自动更新。 点击齿轮图标打开发射器设置 用于模块创建的Niagara作用域 Niagara现在拥有一个新的模块创建概念模型,名为 作用域(Scopes),它定义模拟运行从开始到结束的信息流。举例而言,系统作用域 流向 发射器 和 粒子 作用域;发射器作用域 流向 粒子作用域,等等。作用域取代了 命名空间 的概念。结合类别(输入(Input)、引用(Reference)、输出(Output)、本地(Local)),使参数在Niagara脚本编辑器中的使用更标准化、更清晰。 用户更新至4.25版本时,现有参数将更新至新范例。 Niagara暂存区 使用Niagara系统和发射器编辑器中的全新暂存区,用户可以创建包含在发射器或系统资产中的模块和动态输入,尝试为效果开发模块或动态输入,并实时查看结果!可从现有的脚本资产导入模块和动态输入脚本,在暂存区中创建的内容也可导出至现有或新的脚本资产。还可使用暂存区创建专属于所创建系统或发射器的内容。可直接在暂存区中进行创建,在发射器或系统内使用,无需新建一个其他方面用途有限的脚本资产。 Niagara选择面板UI改良 新版本已对Niagara系统和发射器编辑器进行改良!现可以使用右键快捷菜单复制、剪切和粘贴模块、模块输入(包括动态输入)和渲染器。此外用户还能够自定义模块显示名称、改善模块的高亮显示,并为设置变量(Set Variables)模块命名创建更好的UI。系统概述中的发射器节点现在将显示这些发射器中所用材质的缩略图。 新增内容:着色模型改良 此版本对材质着色模型进行了较大改良与增补,支持各向异性属性、带精确物理属性的真实着色半透明材质,以及能准确响应不同时间光线物理变化的改良透明涂层着色模型。 这些改良十分适用于创建汽车和建筑视觉表现的材质,以及用于媒体和娱乐。 各向异性材质支持(测试版) 我们已利用GGX各向异性公式实现了支持正确各向异性材质的第一步。在此版本中,这些材质将支持准确的IBL光源,以及在透明涂层和默认光照着色模型材质中的反射。而针对区域光源,BRDF则回退到之前使用的各向同性响应。 为支持各向异性材质,我们已对GBuffer进行扩展,可在BasePass不发射速度时提供切线缓冲。用户可以选择通过材质进行顶点变形,在速度传递期间触发对象渲染,免去在BasePass过程中输出速度的麻烦。这意味着可启用切线缓冲,且BasePass过程中没有速度,但不会丢失任何功能。 在项目中启用各向异性材质的方法是使用项目设置(Project Settings)将 使用各向异性BRDF(Use Anisotropic BRDF) 设为true。在材质中,主节点现有 各向异性 的输入,用于控制光沿表面照射的方向。 基于物理的半透明度 新版本添加了基于物理原理的新半透明着色模型 薄半透明(Thin Translucency),能够在单通道中处理材质(例如彩色或有色玻璃)。这样能使材质根据物理原理准确地呈现透明材质,同时玻璃着色器的质量和性能也优于之前使用的方法。 举例而言,在创建有色玻璃材质时,需要添加白色高光,并将其用于背景上色。在旧版本中,要实现这种效果,唯有使用单独的对象处理白色高光,同时执行复杂的设置,才能以正确的顺序混合。薄半透明着色模型根据物理原理使用着色器在单通道中渲染,着色器会考虑光线从空气反射到玻璃和从玻璃反射到空气中的情况。 执行以下操作,在材质(Material)中启用薄半透明(Thin Transparency): 将 着色模型(Shading Model) 设为 薄半透明(Thin Translucent)。 将 光照模式(Lighting Mode) 设为 表面前向着色(Surface ForwardShading)。 将纹理或颜色表达式插入 Thin Translucent Material Output 表达式。 透明涂层着色模型改良 此版本显著提高了透明涂层着色模型的物理准确度。新版本对模型进行了修改,并首先使用路径追踪器实现这些修改,所以能够根据引擎中生成的地面实况图像来提高传统栅格化和光线追踪技术的质量。 此外,虚幻引擎的透明涂层模型在设计时便考虑到了环境光照因素。在新版本中它对精确光源有正确的方向响应,其中衰减和菲涅耳项主要针对带有方向性的光源,而环境光照则保持不变。 新增内容:Chaos物理系统更新 Chaos物理系统现在已成为《堡垒之夜》版本更新的必备工具之一,它支持以下功能: 破坏——这个高性能系统能实时打造影片级的大范围破坏 静态网格体与碰撞的动态交互——场景中动态物体的刚体模拟 布料——布料、旗帜及其他织物的动态布料模拟 毛发——基于发束、准确反映物理特性的毛发模拟 刚体骨骼控制——马尾辫之类物体的简化模拟 场景查询——在场景中执行追踪 新增内容:高质量多媒体导出(测试版) 高质量多媒体导出(High-Quality Media Export)是Sequencer影片渲染功能的升级版,可以提升质量、简化生产流程集成,并提升用户可延展性。使用高质量多媒体导出可以累积多个渲染样本来生成最终输出帧;从而实现更高质量的抗锯齿、径向动态模糊,并减少光线追踪中的噪点。 高质量多媒体导出还支持一些新功能来生成高质量渲染,如支持分块渲染的全新高分辨率设置,其能克服GPU内存限制和设备超时限制。用户还可以导出半透明图像(借助相应项目/场景设置),生成拥有线性数据的16位HDR图像,并将渲染配置保存为开发者可以重复使用并与分享的资产。最后可通过新的渲染队列进行批处理,从而轻松批量渲染多个序列,其类似Adobe After Effects中的渲染流程。 最终,高质量多媒体导出将取代Sequencer的现有渲染影片功能。在当前版本中导航至 窗口(Window)> 过场动画(Cinematics)> 影片渲染队列(Movie Render Queue),再将关卡序列添加至渲染队列便能使用高质量多媒体导出。请注意:高质量多媒体导出暂时还没有渲染影片的完整功能,因此Sequencer仍将默认使用渲染影片。 欲知高质量多媒体导出的详情,请参见高质量多媒体导出概览。 全新内容:音频系统更新 虚幻音频引擎中新加入了卷积混响处理和声场渲染。音频设计者可以模拟出高度真实的声效空间,将听众包围在其中,从而使项目拥有身临其境的真实音效体验。 卷积混响 用户可以利用全新的 卷积混响 效果,使用来自真实或设计空间的样本在虚拟环境数码模拟真实声效,构建更真实的混响。 卷积混响是对传统混响技术的改良,由数据驱动。它并不是使用延迟缓冲、滤波和各种其他DSP拓扑的组合来模拟混响,而是使用音频采样来用作真实或设计空间的声学测量,以模拟实际房间和环境的声音。 卷积混响能增强在舞台之外或录音棚中录制声音和对话(通常是在后期制作中)的"临场感",因此通常用于电影电视制作中,打造出更丰富、更逼真的声音混合。 原生声场环境立体声渲染 虚幻引擎新版本支持渲染 声场环境立体声,来提供全沉浸式的空间音效。声场与传统环绕声有所不同。环绕声资产可提供静态声床的良好展示;而环境立体声资产能包围听众,让音效在声场中旋转,打造出交互环境更真实的沉浸体验。 球面谐波函数前4个波段的视觉展示。蓝色部分是函数为正的部分,黄色部分代表函数为负的部分。图片由Inigo.Quilez提供。此文件已获得Creative Commons Attribution-Share Alike 3.0 Unported授权。 声场是音频对光照球面谐波的音频模拟。使用引擎中的声场录音作为音频"天空盒",声场可以相对听众和声源方向旋转。 原生声场通过环境音效极大提升空间深度,无视用户的扬声器配置。 因为可以用专用麦克风从现场录音中采集声场,因此渲染这些声场可显著增加场景的真实感。 新增内容:LiDAR点云 虚幻引擎现已内置激光扫描点云,能更轻松地将真实世界带入实时展示。 我们对商城中LiDAR点云插件的性能和伸缩性进行了重大改良,添加了新的设置和功能,使其可直接在虚幻编辑器的插件(Plugins)窗口中使用。 导入 支持几个主要的点云格式,包括ASCII(.txt、.xyz、.pts)和.las文件。 可将文件拖到内容浏览器上,或在运行时使用蓝图API将其导入编辑器。 异步导入可避免在加载内容时锁定引擎。 渲染 点云可以投射和接收动态阴影,有助于研究太阳和光照。 支持极大的数据集,以及文件按需数据流送和GPU流送。动态的细节等级(LOD)系统可保持较高性能,同时通过优先处理视口中央的点来保持视觉效果。 使用简单点或自定义材质来渲染数据。 多种着色技巧(RGB、强度、仰角、分类等等)。 广泛的颜色调整。 用包含的后期处理材质所提供的Eye-Dome光照技术加强形状。 交互 在编辑器视口中隐藏、删除、合并和提取单独的点。 根据点数据创建碰撞网格体。 使用蓝图API在编辑器中或运行时新建云和添加点。 新增内容:HoloLens 2改良 虚幻引擎中的HoloLens 2经过了大量改良,能更高效地开发针对增强现实平台的应用程序,并为用户带来更强的参与性和沉浸感。 图片由微软提供 虚幻引擎新版本支持从桌面应用程序远程运行应用程序,无需再运行虚幻编辑器。 我们还添加了实验性的OpenXR支持(可在HL2上显示像素)以及后期重投影,可实现从第三人称摄像机视图中进行混合现实采集,并消除来自帧、深度和第三人称摄像机渲染缓冲的渲染目标副本,每帧可减少2毫秒帧渲染开销。 Azure空间锚(测试版) 虚幻引擎现在还支持Hololens 2的Azure空间锚,全息图可持续留存在会话之间的真实空间中。 Mixed Reality UX Tools Unreal现在支持使用混合现实UX工具插件,该插件通过GitHub上的开源插件为混合现实开发者提供了一组UX工具(按钮、操作器、近+远交互、跟随行为),以便加速开发。 Unreal还支持在UE4的PIE视口中模拟双手,这使得开发者可以使用模拟出的双手来测试混合现实应用,而不需要将应用部署到模拟器或设备上(需要混合现实UX Tools插件来提供手部模型) 新增内容:光线追踪更新,可用于生产 随着新版本的发布,我们高兴地宣布,虚幻引擎的光线追踪功能现已就绪,可用于生产!经过之前数个版本的开发后,我们终于实现了这个目标。新版本还将继续添加新功能并改善现有功能,同时保持稳定性和性能。 左上:堡垒之夜 | Epic Games;右上:塞娜的传说:地狱之刃II | Ninja Theory:左下:A5 Cabriolet模型由奥迪提供,HDR图像由HDRI Haven提供:右下:建筑可视化内饰渲染示例 | Epic Games 本版本有以下新增内容和改良: 在CPU和GPU上添加了对Niagara网格体发射器的支持。 添加了对全新各向异性着色模型的支持。 极大改良了透明涂层着色模型结合光线追踪功能使用时的质量。 向路径追踪器添加了透明涂层BRDF材质的支持,可生成地面实况对比。 添加了对次表面分析材质的光透射支持。 新增内容:导航改良(测试版) 我们在虚幻引擎4.25中对寻路网格体进行了一些改良,重点是提高灵活性并进行优化。 离线寻路网格体构建 可将-BuildNavigationData添加到ResavePackages命令行,在离线状态下重新构建寻路网格体。开发者可借此在自定义构建流程中更轻松地构建寻路网格体。 用于FindPath查询的CostLimit参数 新版本中,FPathFindingQueryData结构拥有CostLimit变量。CostLimit限制了可在导航期间添加到开放列表的节点总成本,提供了一种提前退出寻路进程的方法(尤其是在处理较长的路径时)。 CostLimit默认值为FLT_MAX,但可在FPathFindingQueryData中手动定义限制,或使用ComputeCostLimitFromHeuristic计算限制。 在寻路网格体下方填充碰撞 新版本中UStaticMeshComponent拥有一个名为bFillCollisionUnderneathForNavmesh的属性。设置此属性后,其将模拟用碰撞填充静态网格体下方的空间。若对象位于寻路网格体边界体积内,则不会在对象表面之下生成寻路网格体。这种方法可以排除不需要生成寻路网格体的区域,用多层碰撞优化关卡。 用重投射寻路网格体生成器重新生成时间分片寻路网格体 重新生成实时寻路网格体的开销可能极高,通常会导致网络游戏出现长时间的服务器高峰。时间分片寻路网格体重生生成将重新生成寻路网格体图块的过程分布到所需时间段中,大幅降低非异步寻路网格体生成造成处理高峰的可能。 新增内容:动画时间轴重构(测试版) 新版本中,动画资产的资产编辑器使用类似于Sequencer的时间轴,在处理动画数据时能提供更一致的外观和感觉。新版本中,动画蒙太奇部分的编辑将在拥有breadcrumb轨迹和下拉菜单的专属选项卡中完成。此外,动画曲线现在使用类似于Sequencer的全功能曲线编辑器。 新增内容:动画压缩改良(测试版) 动画压缩系统现已全面升级,提高了灵活性和生产效率。新版本中插件能够指定新的动画压缩方案和压缩数据结构。动画序列现在引用压缩设置资产,其中包含一个或多个应用于动画序列的编码解码器,取代了旧版本中硬编码的"自动压缩"概念。新版本中的动画压缩为异步操作,在虚幻编辑器中作为非阻碍性的操作执行,因此在压缩过程中可以继续进行其他操作。 新增内容:Control Rig改良(实验版) Control Rig已经过优化,处理速度最高提升了 20%,内存用量最高减少了 75%!此外,添加这些功能后,整体用户体验也得到了改善: 值的可视化调试和直接操作 层级现在支持骨骼树、空间和控制,其中控制按钮代表绑定的3D用户界面。 我们提高了用户界面设计的灵活性,增加了Python脚本编写全覆盖,实现了工作流任务的自动化,扩展了对序列中的Control Rig的支持。 新增内容:带Control Rig的编辑器内动画(实验性) 用户可使用Control Rig在关卡编辑器中对绑定角色添加动画,然后将资产导出为动画序列在游戏中使用。在虚幻编辑器中即可创建、保存和显示动画。直接在引擎中创建这些动画也无需用到外部第三方程序。 新增内容:声场和终端副路混合(测试版) 在虚幻引擎中实现空间音频格式时,声场和终端副路混合(测试版) 可有效缩短开发时间,并提供更好的空间定位控制,打造出身临其境的游戏体验。 新增内容:默认副路混合资产化(测试版) 默认副路混合现可在音频设置中定义,可像其他副路混合资产一样处理,实现更快的迭代速度。到目前为止,默认音效副路混合(主音效、环境立体声、混响、EQ和混响插件副路混合)在UE4中皆为硬编码,意味着用户对这些副路混合添加效果的能力很有限。 新增内容:主/侧链压缩(测试版) 主/侧链压缩(测试版) 是一种改良方法,可以在听众需要集中注意力聆听特定声音时避开环境音效。例如当敌人靠近时,远处更响的手榴弹爆炸声不会掩盖掉敌人的脚步声。音效设计人员能够通过混音的动态范围进行更好的音效控制,并考虑实际的音频特性。 新增内容:转换为带子Actor的蓝图 在新版本中,用户能够使用关卡编辑器中的"转换为蓝图(Convert to Blueprint)"操作 将多个Actor转换为带子Actor的单一蓝图资产。可以使用新的 子Actor(Child Actors)选项创建单个蓝图资产,其中包含任意类的主Actor,且所有选定Actor的副本(以及任何修改的属性值)都通过子Actor组件附加到该资产。将此蓝图资产放入关卡后,所有选定Actor将生成为一个组。 新增内容:UObject属性优化 此为API更新,不使用源代码的最终用户可能不会注意到此更新。 插件开发者请阅读UProperty版本说明。 UProperty已重构为FProperty,意味着用UPROPERTY宏标记的属性不再承担作为UObject的开销。此重构拥有以下好处: 提高加载性能,在加载大量蓝图时尤其明显 提高垃圾回收性能,项目拥有大量蓝图时尤为如此 节约内存,在《堡垒之夜》中进行的测试显示内存节省量超过30 MB 性能改良包括: UObject迭代(要检查的对象更少) FProperty转换速度是UObject转换的三倍 FProperty迭代速度几乎是UProperty迭代速度的两倍。 新增内容:Visual Dataprep改良(测试版) Visual Dataprep系统已经过改良,初学者更容易上手,高级用户则能更灵活地加以运用。 图表编辑器现在拥有独特的外观和感觉,能够更好地反映其执行状况和数据流: 它强调操作的线性序列,而非连线式蓝图风格节点图表。视觉提示更清晰,帮助用户添加新的操作步骤,并用颜色区分过滤器块和运算符块。 此外: 已向蓝图和Python公开最常见的Dataprep操作。自定义编辑器脚本可通过Visual Dataprep系统驱动导入,甚至可使用操作、过滤器和运算符设置新的dataprep图表。 新增了一些 Select By 块,可按图层、顶点数或三角形数来过滤对象。 新增了几个 Operator 块,提供用于设置场景和资产的额外选项,比如向Actor的3D位置添加随机偏移的功能、在静态网格体中翻转三角形朝向的功能、替换资产引用功能,等等。 详情请参见Visual Dataprep。 新增内容:Datasmith对PLM XML的支持 大型CAD项目将Siemens PLM XML作为专业3D应用程序之间丰富互操作性的标准。虚幻引擎中现已添加实时可视化功能,用途也更加广泛。 Datasmith CAD导入器导入PLM XML文件,提供其为其他数据格式所提供的全部功能。导入流程从引用文件中自动导入几何体,保留场景层级和位置,并提供无损级重新导入功能,可保留在虚幻引擎中所做的更改。它将PLM XML用户数据转换为Datasmith元数据,甚至为各个PLM XML产品视图创建变体,以供变量管理器使用。 欲了解Datasmith的使用基础知识,以及导入进程的细节,请参见Unreal Datasmith。 新增内容:Datasmith交互操作改良 每个版本我们都会不断改良Datasmith对第三方应用程序和文件格式的兼容。 Rhino 新版本中,用户可以更好地控制虚幻引擎中使用的三角形网格体来渲染Rhino文件中的参数化表面。从Rhino文件导入场景时,可以为文件中的参数化表面选择两种不同的曲面细分策略: 可使用导入选项(Import Options)对话框中提供的曲面细分设置,通过Datasmith对所有参数化表面进行曲面细分。(仅4.24版有此选项。) 可重复使用之前由Rhino创建并存储在Rhino场景中的三角形网格体。若对Rhino渲染器中实现的曲面细分效果感到满意,此策略还可为虚幻引擎的实时可视化效果保留相同的曲面细分效果。 新版本中Datasmith会将Rhino点作为空Actor导入。此操作将保留这些点在场景层级中的位置,并可在Rhino场景中放置占位符和目标点,将它们带入实时显示效果。 新版本中,Datasmith将Rhino对象的相关技术元数据作为Actor上的标签导入。 详情请参见使用Datasmith和Rhino。 Revit 在新版本中,可以将Datasmith导出整合到Dynamo脚本中,从而更好地控制设计实时可视化效果的自动生成流程。Revit的Datasmith导出插件现在包含用于Dynamo的插件:DatasmithDynamoNode.dll。此插件包含新的 DatasmithDynamoNode.Export3DViewsToDatasmith 节点,可用于将多个3D视图从Revit批量导出到.udatasmith文件。 用户可以指定要导出的Revit文档、生成.udatasmith文件的路径、要导出的3D视图的ID,以及Revit生成的三角形网格体所需的曲面细分等级。 我们还进行了很多其他修复和改良: 对于在Revit有起始点的实体(如墙壁、栏杆、管道和线路),现在其枢轴已放在虚幻引擎中的正确位置。 在新版本中,除了Revit项目文件(.rvt)之外,还可将Revit族系文件(.rfa)导出到.udatasmith文件。 导入的纹理现在在内容浏览器中将正确进行标记,对应其Revit源。同样,用于幕墙面板等对象的静态网格体组件也在命名方面有所改良。 新版本中,来自Revit的PBR材质类在导入时可保持其视觉属性的完整,而非一律变成灰色。 新版本中,虚幻引擎中的Actor可以存储标签,用于表明Revit中的相应对象为翻转或镜像。这有助于用户对需要特殊处理的翻转或镜像对象进行自动化dataprep操作。 详情请参见使用Datasmith和Revit。 新增内容:变体管理器改良 变体管理器可帮助用户提前设置关卡的多个不同变体,并在运行时在这些变体之间进行切换。新版本对变体管理器UI进行了几处易用性改良,以便根据需要设置变体: 现可将外部图像文件设为变体的缩略图。 可置换出变体中的任何绑定Actor,替换为另一个Actor。 可将任何Actor的已采集属性拖动到新位置,对其重新排序。这有助于让最关键的属性轻松地显示出来。 若采集颜色属性,现可使用取色器更轻松地设置属性值。 现可从过场动画摄像机Actor中采集更多摄像机设置。 设置变体在激活时调用蓝图函数后,该函数现在会接收来自情境的额外参数:关卡变体集资产、被激活的变体,以及拥有已激活变体的变体集。此情境有助于设置复杂的响应。举例而言,有时可能需要激活其他有依赖性的变体。 新增内容:实时USD改良(测试版) 提升了虚幻编辑器对来自Pixar的通用场景描述(USD)交换格式的支持力度,可利用选择的场景构建和动画工具更轻松地打造资产管理流程。在新版本中: 将USD数据导入实时USD阶段更快、响应更灵敏,采用多线程导入且渲染性能更佳。 现在每个USD图层都将进入自身的Sequencer轨迹,便于单独处理每一层的对象和动画。 导入器将把场景对象的USD用途导入虚幻引擎中。在编辑器中,可在USD支持的不同用途(默认、渲染、代理 和 指南)之间切换场景的视觉效果。 该导入器现支持USD点实例化器,用静态网格体实例自动设置虚幻引擎场景,与点实例化器创建的各个对象匹配。 可将虚幻引擎材质指定到USD基元。 USD Python API现在在虚幻编辑器中公开。可以使用它在Python编辑器脚本中控制USD场景。 若使用自定义USD模式,用户可连接自定义C++回调函数来处理自己的数据结构。 如需了解相关概述和使用说明,请参见UE4中的通用场景描述。 新增内容:编辑器性能改良 我们每发布一个新版本都会显著提升性能,尤其是提升处理大场景时的性能。4.25版中加入的性能改良有: 在部分情况下,通过Incredibuild进行的着色器编译效率提升90%。我们用内部设置进行测试时,发布着色器的编译时间已缩短五倍。 代理网格体创建流程可以更好地根据CPU数量进行调整,复杂设置下也能将处理时间轻松缩短三倍以上。 在部分情况下,大场景上的批凸包计算较旧版提速100倍之多,缩减4倍内存用量,将数小时的处理时间缩短为数分钟。 材质烘焙改良后,处理速度快14倍,内存用量减少3倍。性能提升各有不同,具体取决于使用的GPU后端。 Datasmith处理数据重导入改良后,重导入时间和内存用量都有了可观减少。 Datasmith CAD场景的保存方式已改良。新CAD场景的保存和加载只需数秒即可完成。 Datasmith VRED及DeltaGen FBX导入工作流在处理大型FBX文件时速度更快。 距离场在多核CPU上的计算效率更高,处理时间将缩短5倍,核心优化更佳。 将影片成片导出为PNG文件的速度得到了前所未有的提升。在计算机上检测到此功能时,其将使用AVX-2之类的新指令集功能。新版本中,可以在16核CPU上将4K视频实时导出为PNG文件。 新增内容:灾难复原改良 最新版本的灾难复原使用的内存更少,处理的资产更大(大于2 GB)、处理速度更快。新版本中加入了新的复原中心(Windows > 开发者工具(Developer Tools) > 恢复中心(Recovery Hub)),删除现有的恢复会话。因为恢复会话可能较大,新版本添加了新设置,以便用户选择保存恢复会话的位置。 新增内容:编辑器涂绘模式改良(测试版) 网格体涂绘系统已得到改良,可提供更流畅的工作流程,并为高级团队提供更高的生产效率和扩展性。上个版本的网格体涂绘工具集功能得到保留,并新增了一些优点: UI更简洁。 不依赖于物理追踪。 (高级用户)可以扩展顶点绘制,以便绘制其他顶点数据 新版本中,模式选择位于工具栏中,而并非面板上的选项卡。进入网格体涂绘模式后,选择工具将独立于涂绘工具。 VR模式不支持这些功能。 新增内容:视口中建模和造型改良(实验版) 我们在各个版本中不断对在虚幻编辑器视口中直接处理几何体的实验工具集进行重大改良,目标是实现完善、端到端的几何体建模和自适应网格体造型工作流程。 本版本中还加入了几个网格体修复工具,旨在修复导入资产涉及的小问题,尤其是那些来自CAD应用程序的问题。这些工具可以填补漏洞,相对于几何体移动枢轴点,以及交互式地生成和调整UV贴图。 新增内容:移动设备的默认渲染特征等级 新版本中,OpenGL ES3.1 和 Metal 2.0 分别是Android和iOS项目的默认特征等级。Vulkan、ES3.1和Metal 2.0都有类似的特性,能够在移动平台之间建立更好的特性对等。我们还可借此利用先进的移动技术,建立与台式机/主机之间更好的特性对等。 OpenGL ES3.1和Metal 2.0在新的Android和iOS项目中将默认启用,可在虚幻编辑器中将其设为预览特征等级。从旧版本虚幻引擎中迁移过来的项目会默认升级到这些特征等级。 OpenGL ES2和Metal 1.2的渲染特征等级已从虚幻引擎中移除。 新增内容:Android NDK 21b更新 虚幻引擎4.25版将更新至 Android NDK 21b。它提供大量Android工具链优化和bug修复;使用最新NDK版本的Android库现将兼容虚幻引擎。 为促成这一变化,虚幻引擎现在需要使用新的Android项目安装进程。现在需要从Android开发者存档下载并安装 [Android Studio 3.5.3,而不使用Codeworks for Android。完成安装后,在 Engine/Extras/Android 文件夹内找到 SetupAndroid 脚本,运行它来完成此流程。此脚本有相应的Windows、MacOS和Linux版本。 若已为虚幻引擎的上个版本安装CodeWorks,建议在使用SetupAndroid脚本之前建议先将其卸载。运行CodeWorks卸载程序,然后删除安装CodeWorks的目录即可完成卸载。 如果你需要支持更早版本的NDK,以便使用之前安装的虚幻引擎,请使用Android Studio安装NDK 14b并根据你的安卓项目设置手动选择NDK路径。关于此类支持的更多信息,请参考我们的关于NDK 21更新的博文。注意,只有在你需要同时支持4.25及更早版本的虚幻引擎时才需要这么做。如果你不打算将项目迁移到4.25,可以继续放心使用Codeworks安装程序。 虽然4.25的预览版本使用了NDK 21,但其发行版本现在使用NDK 21b了。如果你已经设置了NDK 21,可以在更新到4.25.0后再次运行SetupAndroid以下载必须的组件。 新增内容:iOS启动故事板 新版本中,虚幻引擎可以使用 iOS启动故事板(iOS Launch Storyboards) 作为iOS上的启动画面。启动故事板支持动画,还可根据用户屏幕大小动态调整内容,提供更佳的启动体验。 要使用启动故事板,请在 XCode 中创建 LaunchScreen.storyboard,并将其放在项目的 Engine/Build/IOS/Resources/Interface/ 文件夹中。之后,在 项目设置(Project Settings) > 平台(Platforms) > iOS 中启用 故事板(Storyboard) 选项。已创建的静态启动画面将不会再打包到iOS版本中。 为了方便需要占位符的项目使用,引擎会为没有启用自定义故事板的游戏自动打包一个默认故事板。你可以用静态启动图片替换此故事板,只需将图片放到 启动屏幕图片(Launch Screen Image)。该图片必须为没有alpha通道的PNG图片。同时为了尽可能兼容不同的纵横比和布局,它的长宽都必须为平方值。 自iOS 13开始,苹果要求开发者使用启动故事板,且不再接受带静态启动画面的送审内容。欲知详情,请参见Apple官方启动画面和故事板文档。 新增内容:Android App Bundles Android App Bundles 是Google Play商店的一种新上传格式。用户可上传单个AAB文件,不必为每种目标设备配置编译、签署和管理单独APK。然后,Google Play的动态交付系统使用该AAB自动生成针对终端用户设备配置的最终优化APK。这极大简化了为Google Play商店准备版本的过程。此外,通过App Bundles版本发布的最终APK的最大容量可达150 Mb,而非先前的100 Mb。注意,不使用App Bundles的项目容量限制仍为100 Mb。 使用App Bundles的方法是:在 平台(Platforms) > Android > App Bundles 下的 项目设置(Project Settings) 中启用 生成包(AAB)(Generate bundle (AAB))。在为Android平台打包项目时,它会将其打包为AAB,而不是APK。虚幻引擎还会自动生成通用APK,以供测试使用。 App Bundles目前不支持使用Android扩展文件(.OBBs)。 新增内容:Android交换帧同步器(测试版) 虚幻引擎现在集成了Google的Android游戏SDK中的 交换帧同步解决方案,以改进虚幻引擎游戏的渲染处理和Androi显示管线间的同步性。此功能将使Android游戏的帧率更稳定,触屏输入的延迟也将得到改善。 要为Android项目启用交换功能,请在想要使用的设备概述文件中添加配置变量a.UseSwappyForFramePacing=1,并使用r.setframepace [value]将帧同步的刷新率设为想要的值。 新增内容:移动设备上的虚拟纹理(实验版) 虚拟纹理 目前在移动设备上尚属实验性功能。使用虚拟纹理时可遵循与PC项目相同的步骤。详见虚拟纹理 documentation文档。 运行时虚拟纹理目前不使用纹理压缩,这意味使用的内存大约是预期的4倍。此外,为确保项目有必要的资产来利用虚拟纹理,目前推荐使用兼容ES3.1或Vulkan的Android设备、iPhone 8或更新版本。 新增内容:移动设备上的眼部适应(测试版) 眼部适应 或 自动曝光 现在不但可用于PC和主机平台,还可用于移动平台项目。开发者可利用此功能重新创建人眼适应亮度变化的效果。 要使用眼部适应,必须启用 MobileHDR 来开启移动设备上的后期处理。可在 引擎(Engine) > 渲染(Rendering) > 移动(Mobile) 下的 项目设置(Project Settings) 中进行启用。 还必须将r.EyeAdaptationQuality和r.Mobile.EyeAdaptation这两个控制台变量设为大于0的值,以启用眼部适应。r.EyeAdaptationQuality变量通过sg.PostProcessQuality伸缩性设置进行配置,该设置位于[PostProcessQuality]部分的BaseScalability.ini中。r.Mobile.EyeAdaptation变量则是直接在BaseDeviceProfiles.ini中的设备概述文件中设置。它默认在Android_Low及Android_Mid设备描述中禁用,以及在部分低端iOS设备上禁用。 虽然眼部适应的性能开销不会很高,但若需要禁用,可将r.Mobile.EyeAdaptation设为0。 新增内容:Niagara模拟和迭代阶段(实验版) 模拟阶段会让一组模块在下一阶段模块之前对所有粒子进行计算。因此在下一个算法步骤(比如约束求解、高度场水模拟和基于位置的动态模拟)开始之前,模拟需要更新所有粒子。 迭代阶段是一个新概念,其中调度数由特定数据接口驱动,而非由粒子数量驱动。还有几个数据接口利用了这一概念,例如Grid2D集合,它修改了2D流体等模拟的2D网格数据。 模拟阶段和迭代阶段都属实验性功能,目前仅适用于使用GPU模拟的发射器。 新增内容:眼部适应(自动曝光)改良 新版本统一了现有参数和设置,对眼部适应(也称自动曝光)进行了重大调整,使其更加易用。 新版本添加了以下改良和功能: 统一了柱状图和基础模式的统一测光模式,可汇聚到中间的灰点,而非在柱状图上形成单独的汇聚点。 新增曝光测光遮罩,使用纹理遮罩控制像素对画面的影响。 使用手动测光模式时可切换至调整物理摄像机曝光(Adjust Physical Camera Exposure),以便在使用曝光时显式使用摄像机后期处理(Camera Post Processing)设置。 改良了HDR(眼部适应)可视化模式,包含显式后期处理设置信息和柱状图信息。 另外还添加了一种新的调试可视化模式,用于识别极亮和极暗的像素,以便将其排除在HDR计算之外。 改良了HDR(眼部适应)可视化模式的曝光补偿曲线,以明确曲线X轴值和Y轴值的起源处。 更改对曝光速度的感知,便于以相同速度沿曲线上移或下移。 这些修改确实打破了使用虚幻引擎旧版本的项目的向后兼容性。虽然有升级途径,但却无法保证项目 外观 得以保留。可参阅我们的技术博文"Epic Games在4.25版中处理自动曝光的方法"。其中包含这些修改的详情,以及它们对项目的影响。 欲知详情,请参阅眼部适应(自动曝光)。 新增内容:毛发和皮毛渲染更新(测试版) 我们一直在研究毛发渲染和模拟功能,现在已进入测试版功能开发阶段。新版本对工作流程、性能、光照等方面进行了大量改良。 工作流程改良: 现在,只要网格体拥有相同的UV空间,Groom便可在这些网格体之间转移。 此外还提供其他Groom组件选项,用于控制根部和梢部的比例,以及发夹的长度。 Niagara模拟参数现可直接从Groom组件中进行编辑。 新版本提供了一些基于曲线的属性,用于控制沿发束的Niagara参数值。 在新版本中导入复杂的groom资产时还将提供资产验证和时间估计。 更好地处理物理资产插值,从而改良对形体的碰撞检测。 新增替换插值法,此法使用径向基函数,而非三角形刚性变换。 性能改良: 毛发模拟和渲染的稳定性和速度得到整体提高,其中包括光线追踪功能的使用。 新版本使用连续的细节等级(LOD)系统,其会在其屏幕覆盖范围缩小时大幅降低groom。 簇剔除用于将groom细分为更小的簇,然后根据层级深度缓冲进行遮挡剔除。 Groom资产现在支持派生数据缓存(DDC),可加快关卡加载速度。 光照改良: 毛发系统使用GPU驱动的稀疏体素化结构,无需使用深度不透明度贴图即可实现更精确的光照。它还能实现更薄的体素化。 光照通道现在支持Groom组件。 新增内容:天空大气改良 新版本中增加了强大功能,可根据物理原理创建大型大气,天空大气组件因此得到了进一步改良。 本版本中有以下改良: 新版本可逐像素计算透光率,确保从太空中查看时行星表面的亮度正确。 行星大气现可在附近的行星体上投射阴影,例如邻近的卫星。 大气现可在关卡中自由移动。 新增内容:材质层改良(测试版) 新版本中材质层已更新,蓝图函数可在运行时修改材质层参数,将图层修改从基本材质传播到派生材质时的行为同样也得到了改良。 新增内容:内存着色器加载时间重构 着色器相关资源已通过新的反射系统转换为扁平内存图像,以便直接从硬盘到内存进行反序列化,而无需使用CPU。反射系统能以健壮的方式处理编译器之间的内存布局差异,显著提高引擎的加载时间性能。 新增内容:向Sequencer添加空间化主音频 新版本中,用户可将音频分段附加到指定时间处的各个音轨组件,向序列添加空间化主音频。在视口中的附加点处绘制音频图标,帮助显示空间化效果。 新增内容:摄像机切换混合 新版本中,摄像机切换轨迹支持混合。用户可使用此功能轻松混入和混出过场动画,或在不同过场动画切换之间混合。启用混合的方法是右键单击Camera Cut轨迹节点并选中 可混合(Can Blend)。 新增内容:镜头试拍录制器增强 新版本提供了一些新增强,简化镜头试拍录制器的使用,同时改善整体用户体验。 指定帧率 现在可针对特定序列选择所需的录制帧率。支持帧率范围为 12 fps(动画) 到 240 fps。默认帧率基于时间码提供方的帧率。 在新版本中,录制数据在录制时将转为匹配时间码 在新版本中,关键帧数据被采集时录制序列将显示时间码。包含每个采样时间码的Live Link源将直接映射到带时间码的序列。就分层录制而言,如果从之前的镜头试拍进行录制,现有轨迹将偏移到录制开始时的新时间码。举例而言,用户便能在之前录制的情境中录制新的相机移动。此外,数据将直接序列化,所有浮点通道的保存时间将提升五倍。这点在录制拥有多个actor的长镜头时尤为重要。 新增内容:模板序列(测试版) 用户可以使用模板序列在多个对象上轻松重复使用轨迹。其还将减少整体的资产复制与保存时间。使用与骨骼动画相似的单根绑定,可将模板序列添加到另一个序列的轨迹,并像其他动画那样进行操纵。 新增内容:HoloLens 2和协作查看器模板 协作查看器模板现支持在HoloLens 2中查看设计内容。 新增内容:产品配置器模板 在新模板中构建新的产品配置器,先人一步。此模板包含内容范例(其展示以多个场景配置设置的变体管理器)和一组可重复使用的UMG控件、这些控件自动构建UI来公开这些配置,以便用户在运行时进行控制。 可在 汽车、产品设计和制造(Automotive, Product Design, and Manufacturing) 类目下找到 产品配置器(Product Configurator) 模板。 新增内容:摄像机内视效更新(测试版) 自摄像机内视效(In-Camera VFX)工具首次推出以来,我们和众多合作伙伴共同努力将虚幻引擎用于现实世界的影视开发项目,在LED屏幕前拍摄真实动作场景。我们汲取了这些经验,并以此对现有工作流程和工具进行了一些改良: 支持多个摄像机同时对着同一屏幕进行拍摄,并将多个虚幻摄像机的视锥渲染到LED墙的不同部分。 可以控制摄像机视锥内外区域的分辨率。可降低LED墙在物理摄像机画面以外区域的分辨率,以此提高性能,而摄像机画面区域之中的渲染质量不会因此受损。 绿屏追踪标记更易于设置,在后期制作中也更稳定(因为它们不会在屏幕上漂浮或移动)。 现在用户能更便利地设置三层合成,在真人演员或布景上合成虚拟前景元素。还能更便利地控制应在虚幻引擎中LED墙上隐去的图层。 可在关卡中放置新的颜色校正体积(将对场景应用局部颜色校正),利用更精细的美术掌控让虚拟场景达到真实世界布景的效果。 可旋转nDisplay生成的渲染输出,增加构建LED墙的选择。 新增内容:nDisplay改良 nDisplay多显示渲染技术是摄像机内视效(In-Camera VFX)系统的一个重要组成部分,支持上述诸多改良。此外还有一些改良能让所有nDisplay用户受益: 现可指定各个nDisplay簇节点,以使用特定GPU进行渲染。 可选择让nDisplay使用NVIDIA硬件提供的帧同步服务,在簇节点网络上同步每个连续帧的呈现。 用户可更轻松地设置nDisplay来渲染到曲面。不再使用PFM或MPCDI来配置nDisplay如何弯曲最终输出以将其投射到曲面上,而是实施新的投影策略,该策略根据在nDisplay蓝图API中指定的网格体自动弯曲渲染的输出。 新增内容:运动图形改良 新版本中,用户可以直接在虚幻编辑器中轻松创建有趣的运动图形,无需切换到外部设计工具。用户可更快地完成屏幕上的图形,并在虚拟世界背景下进行合理设计。 3D文本动画(测试版):用户现在可以利用全新的 文本3D字符变换(Text 3D Character Transform) 组件,使用Sequencer对3D文本对象的每个字符添加平移、缩放和旋转动画,而非将整行文本作为一个单元添加动画。 表格和图表(实验版):现在用户可以直接在虚幻引擎中设置统计数据的显示。新的 数据图表(Data Charts) 插件提供预先制作的蓝图,可从数据表资产中提取统计数据,并将其中的数字呈现为条形图、饼状图或图表。 新增内容:直播和实况工具 此版本加入了一些新功能和改良,可以在直播环境下更轻松地开发和使用虚拟布景、合成视频源和增强现实(AR)。 新版本添加了以下改良和功能: 合成版(Composite Plate) 是一种摄像机蓝图,可将视频源投射到平面或场景中的一组静态网格体对象上。主播可利用此功能通过虚拟方式将自己的才艺置于3D场景中的任何指定深度,而无需选择将物体标记为前景或是背景。此技术还可用于模拟完整的环境,方法是从给定的角度拍摄真实场景中的位置,然后将拍摄的图像或视频投影到场景几何体上。这两种技术都十分有效,因为从相机的角度看,合成版是被投影到3D空间中的几何体上。 对 绿屏键控器 进行改良,提升了现有色度键控系统。 新增内容:DMX支持(实验版) 此版本加入的实验版功能之一,支持将虚幻引擎连接到使用DMX协议的第三方控制器和设备。通过ArtNet和aSCN网络上的双向通信和交互,可从虚幻引擎中控制舞台表演和光照设备,并在设计阶段在虚拟环境中预先显示表演效果。 新增内容:Magic Leap改良 此项新增功能面向Magic Leap开发者,支持使用新GameMode、PlayerController和GameState类设置共享场景体验。在API方面,改良为了内容的持久性添加了更多工具API,为连接和摄像机内部函数添加了API集成。此外还改良了AugmentedReality接口,便于手持式AR项目迁移到Magic Leap,并提高了零迭代的性能和稳定性。为便于调试和优化,我们添加了Visual Studio调试器对纯蓝图项目的支持,并支持通过配置变量来修改线程亲疏度。 新增内容:支持SteamAudio的动态几何体 现在可以在虚幻中使用SteamAudio的动态几何体了。单个静态网格体组件可以单独移动、旋转和缩放,Steam Audio会随之更新音效。关于该功能的更多详情,请点此查看SteamAudio插件和演示视频。 新增内容:平台SDK升级 在每个版本中,我们都会更新引擎来支持平台合作伙伴的最新SDK版本。 Windows 推荐: Visual Studio 2019 v16.5 最低: Visual Studio 2017 v15.6 Windows SDK 10.0.18362 NET 4.6.2 Targeting Pack Build farm编译针对的IDE版本 Visual Studio - Visual Studio 2017 v15.9.4工具链(14.16.27023)与Windows 10 SDK(10.0.18362.0) 最低支持版本 Visual Studio 2017 v15.6 需要NET 4.6.2 Targeting Pack Xcode - Xcode 11.1 Android Android Studio 3.5.3 Android NDK r21 同样支持NDK r20b,并处理部分低端设备上的兼容性问题。 ARCore 1.7 ARKit 3.0 Linux "SDK"(交叉工具链) Oculus 1.44 API Level 23 OpenXR 1.0 Google Stadia 1.44 Lumin 0.23 Steam 1.46 SteamVR 1.5.17 Switch SDK 9.3.1 + 可选NEX 4.6.3(固件版本:9.0.1-1.0) 支持 IDE:Visual Studio 2017、Visual Studio 2015 PS4 Orbis SDK 7.008.001 系统软件7.008.021 支持 IDE:Visual Studio 2017、Visual Studio 2015 XboxOne XDK:2018年7月 QFE-9 固件版本:2月恢复版10.0.18363.9135 支持 IDE:Visual Studio 2017 macOS SDK 10.15 iOS SDK 13, 12, and 11 tvOS SDK 13, 12, and 11 主要升级说明 烘焙数据兼容性 虚幻引擎4.25版中加入了一些主要序列化修改,以改良加载时间。这些修改的负面效果是 烘焙数据在Win32和Win64可执行文件之间不再兼容。项目需要烘焙两套数据来支持两种编译类型。然而,64位Windows仍将运行Win32可执行文件,因此部分需要支持32位Windows的项目可以选择只发布一个Win32版本。 加载时间改良 由于UObject的优化、常规渲染和物理优化以及材质和着色器的序列化开销降低,各平台的用户将体验到10-20%的加载时间改良(具体改进幅度因平台而异)。 Win32支持 我们将正式废弃Win32支持。 在虚幻引擎4.25和4.26版本中仍会支持32位Windows操作系统,但对这将在之后的版本中被移除。 GitHub版本分支 如果使用GitHub来访问虚幻引擎的源代码,您可能会看到两个单独的分支:4.25和4.25-Plus。 我们 建议大家使用4.25分支,除非Epic特别告知您使用4.25-Plus进行开发。 移动平台 OpenGL ES2和Metal 1.2的渲染特征级别已移除。 XR 运动控制器按键已在4.24中废弃,在4.25中完全移除。 Release Notes AI Bug Fix: UAIDataProvider_QueryParams: Fixed data binding for Boolean values. New: UEnvQueryTest_GameplayTags.SetTagQueryToMatch was added to support the EQS test class in native code. New: Added a Project option to tell the AI Perception System to forget Actors with perception past the Max Age duration. New: AIPerceptionComponent: Added functionality to allow clearing of perception data by making ForgetAll callable from Blueprint. New: UEnvQueryInstanceBlueprintWrapper.OnQueryFinishedEvent can now be accessed from native code by calling the GetOnQueryFinishedEvent function. New: SpawnAIFromClassnow allows callers to specify the Owner of the spawned AI agent. Behavior Tree Bug Fix: Behavior Tree restarts that were caused by execution requests queued for branches getting deactivated while applying search data can now be prevented. New: Blueprint-implemented Behavior Tree nodes can now specify a custom name that will appear on the nodes in the Behavior Tree Editor. New: Added functionality to allow custom BrainComponents to run, as well as allow BrainComponents to be set up as components on AIController and run automatically when Actor is possessed by the AIController. New: Added functionality to support the use of UBlackboardKeySelectoroutside Behavior Tree Editor. Debugging Tools New: Navigation path debug drawing now uses NavTestRenderingComponent to draw arrowheads indicating the path direction. New: Corrected minor typographic errors in Gameplay Debugger user interface. New: While using Simulate in Editor, Actor filtering in UGameplayDebuggerLocalController now allows non-Pawn AActor selection. New: Added new GameplayDebugger console commands: EnableGDT(for legacy support) gdt.Toggle gdt.SelectPreviousRow gdt.SelectNextRow gdt.ToggleCategory New: Verbose logging for start and stop queries were added to EnvQueryManager. Navigation Bug Fix: UPathFollowingComponent::RequestMoveWithImmediateFinish and UPathFollowingComponent::AbortMove were made consistent in terms of handling given move requests' bStopMovementOnFinish value. Bug Fix: Fixed how ARecastNavMesh.DefaultMaxSearchNodes gets applied to make it consistent. Bug Fix: Fixed FNavAgentProperties::IsNavDataMatching to make it respect null-navigation-data properly. Bug Fix: Fixed how NavigationSystem rebuilds Static and DynamicModifiersOnly NavData in Play in Editor and Game modes, if bAutoCreateNavigationDatais set to False. Bug Fix: Fixed ARecastNavMesh::ReplaceAreaInTileBounds to properly apply the requested new area type. Bug Fix: Fixed a navigation filter bug resulting in it not blocking null-area navigation links. Bug Fix: Fixed navmesh polygons so they can connect diagonally across tile boundaries. Bug Fix: Fixed a bug in NavigationSystem so that registration of CustomNavLink from a different world is now prevented while in Play in Editor. Multiple NavigationSystems may coexist (For example, Editor, Client Game, and Server Game worlds) so now any given NavigationSystem instance performs a single flush of the global pending queue to register the links associated with their outer World. Bug Fix: Fixed a navmesh rasterization issue, where a wrong offset on the vertical axis caused some voxels to be rasterized outside of their source geometry. This also addresses a side effect where a NavModifier aligned with geometry would not mark the navmesh, even when taking cell height into account. Bug Fix: Fixed Chaos convex mesh indices winding so that triangle normals point outward for nav collision. Bug Fix: Fixed rasterization area merging when adding new spans. Navmesh now generates correctly on a slope. Bug Fix: Fixed a NavigationTestingActor path reset issue when toggling bSearchStart. Bug Fix: Fixed updateSlicedFindPath not behaving like findPath (missing check for DT_UNWALKABLE_POLY_COST). New: Added support for a cost limit in Detour pathfinders and added a parameter in the navigation query, DetourNavMeshQuery. New: Added a minimum CostLimit value to FPathFindingQuery::ComputeCostLimitFromHeuristic. Exposed the minimum CostLimit parameter in NavigationTestingActor. New: Added an option to prevent navmesh generation under terrain. New: Added LogNavigationDataBuild to output a basic build summary in the log. New: Added an option to the ResavePackagesCommandlet to rebuild navigation data. This option will check out, save and check in dirty packages only. New: Added a callable-in-editor function for copying endpoints from NavLinkProxy's simple links over to smart link. New: NavMesh links debug drawing was streamlined to more easily distinguish active links from inactive ones. Animation Bug Fix: BeginPlay() is now called appropriately on Anim Instances linked to Anim Notifies. Bug Fix: Curve compression now prevents curves from collapsing into a single value when using all constant keys. Crash Fix: Control Rig nodes are no longer evaluated twice in Animation Tracing. Crash Fix: Initialization of linked layer/graph nodes no longer crashes when the Anim Instance is re-used. Crash Fix: Anim Instances using linked Anim Instances no longer crash on re-use. Crash Fix: Calling undo after adding a slot in Anim Montage Editor no longer crashes. New: Animation Notifies are now optionally propagated to and from linked Anim Instances with propagation options for each linked Anim Graph/Layer node. Grouped Layer nodes propagate as a whole; so, if any grouped Layer nodes propagate, all nodes in the group will propagate as well. Crash Fix: Removed use of AllSortedComponentData in Animation Budget Allocator Debug HUD that was causing a crash. Bug Fix: Added extern keyword to VirtualBonePrefix declaration to prevent it from ending up in every .cpp file. Crash Fix: Remapping an additive animation no longer crashes. Bug Fix: Deleting virtual bones no longer displays the "GetBoneMatrix(x) out of range of Spacebases" message. Bug Fix: Linked instances are now cleaned up properly on unlink and reset, ensuring that Anim Notify states are fired off correctly and any playing Anim Montages are correctly terminated. Bug Fix: Scrubbing an Animation in Sequencer no longer crashes. Bug Fix: Fast-path is no longer incorrectly disabled for array properties. Bug Fix: Const transform accessors in FAnimInstanceProxy are now correctly marked. Bug Fix: Reloading Skeleton with unsaved virtual bone changes no longer causes an ensure. Bug Fix: Stepping through a geometry cache now shows the frame corresponding to the time set in sequencer. New: Animation compression is now exposed to plugins and uses Compression Assets in place of Compression Schemes. New: Animation compression is now performed asynchronously when opening the Unreal Editor, similar to the way shaders are compiled. New: Gameplay Abilities can now specify the Anim Instance on which to play a montage. New: There is a new plugin to support reading Live Link Animation and Transform data inside of Control Rig. New: There is a new Animation Blueprint event callback that is called when the Linked Animation Layers are initially initialized. New: CSV Animation stats are updated and expanded. New: Error messaging around the default curve compression asset has been improved. New: Added ISPC optimization for Animation for Windows, Linux, Mac and Android platforms. Bug Fix: Cooking of conformed-on-load interface graphs is now deterministic. API Change: Anim Nodes now name their node properties without using the node GUID. Animation Assets Crash Fix: Calling undo after editing a curve no longer crashes. Crash Fix: Deleting a Pose before base Pose no longer crashes. Crash Fix: Removing a curve through an Animation Modifier no longer crashes. Bug Fix: Loading cooked Animations in Unreal Editor no longer causes an ensure. Bug Fix: Cooking with smart names on USkeletons is now deterministic. Bug Fix: Montage Sections panel now scrolls when it gets too large. Bug Fix: Right-click dragging to scroll on Anim Notify tracks now works correctly. Animation Blueprints Crash Fix: Doing a fast-path copy of FName values in FExposedValueHandler no longer crashes due to differing size of the type between compile and run time. Ensure that non-compatible types always skip the fast copy path. Crash Fix: Moving or renaming an Anim Blueprint no longer crashes. Bug Fix: Fix compilation crash after adding an animation Blueprint interface with duplicate names Bug Fix: Implemented several fixes for the RandomPlayer animation BP Node: Fixed a problem where we would advance twice during the update loop if we both exceeded the loop count and blending had finished, resulting in a possible double play of the same animation in shuffle mode. Related to the double-advance problem, changed the shuffle generation so that it's done after popping the last shuffle value, to hint to it directly which value was the last one to be used so that the permutations are non-contiguous. Ignore empty play entries and entries with zero play chances in random mode. Ensure we play each sequence fully, instead of just the portion from blend stop to end. This eliminates the "popping" effect when transitioning between animations. Bug Fix: Derived classes of FAnimNode_StateMachine no longer fail to compile in Animation Blueprints. Bug Fix: Changing the Widget Mode on a bone in the Anim Blueprint Editor no longer causes an infinite loop. Bug Fix: State machine recorded weights are now correctly reported. Bug Fix: Split pins now work correctly in Animation State Machine states. Bug Fix: The Control Rig Anim Node ticking no longer ticks the sub tree multiple times. Bug Fix: Control Rig now initializes correctly from Sequencer. Bug Fix: There are no longer duplicate Hide Unrelated nodes in the Animation Blueprint Editor. New: Reroute nodes for Pose links now use a Pose icon. New: ApplyMeshSpaceAdditive now has extended alpha support including support for boolean and curve controlled alphas with clamping. New: Added Blend Space node debug widgets to display the current sample of the Blend Space when debugging. API Change: The interpolating solver is available as a templated helper class, called TRBFInterpolator in RBF/RBFInterpolator.h. It can be used to smoothly interpolate any N-dimensional target value for any M-dimensional input values, assuming a suitable distance metric exists. Animation Tools Crash Fix: Deleting curve keys in Animation Curve Editor no longer causes a crash. Crash Fix: Interacting with the Animation Insights transport controls when a session is tracing to an external application no longer crashes. Crash Fix: Calling undo after adding an Anim Montage segment no longer crashes. Bug Fix: PostUpdateAnimation is now called on linked instances if NeedsUpdate on the main instance returns false. Bug Fix: Animation Curve Editor auto-zoom now works consistently. Bug Fix: Renamed curves displaying now display the correct name when edited directly. Bug Fix: Highlight of Loop button in Anim Editor is no longer triggered by Reverse button. Bug Fix: Linked-instance asset players are no longer ticked twice when a post process graph is enabled. Bug Fix: Undo/redo no longer closes Details panels for Anim Notifies Bug Fix: Renaming curves in the Animation Timeline now marks the asset as dirty and can be undone/redone. Bug Fix: Reverse playback of an Anim Montage now sets the play rate of the Anim Montage instance correctly. Bug Fix: Sync marker selection has been improved. \ Bug Fix: Reported number of frames in the Montage Editor is now accurate. Bug Fix: Anim Notifies can no longer be dragged outside of the notify area. Bug Fix: Editing the section time of an Anim Montage no longer crashes. Bug Fix: Anim Montage sections now refresh correctly when edited via the Details panel. Bug Fix: Anim Sequence frames can now be added from the Curve Editor view. Bug Fix: Anim timeline now allows access to times outside the Anim Sequence range. Bug Fix: Right-click dragging on an Anim Notify track to scroll no longer flickers. DPI scaling is now correct when drag/dropping Anim Notifies. New: Unreal Insights channel filtering plugin now has a "Trace Data Filtering" Tab which allows for setting individual Trace Channel states and allows for specifying Presets, or groups of Channels that should be enabled together. New: Implemented ControlRigLayerInstance to support Sequencer layering using ControlRig Tracks New: Implemented Additive Control Rig to be used in Sequencer for layering/tweaking Animations. New: Added simple multi effector FABRIK rig unit. Import/Export Bug Fix: Bones metadata from FBX are now transferred to the Anim Sequence at import. Skeletal Mesh Crash Fix: Calling USkinnedMeshComponent::SetSkeletalMesh() after destroying its ChildPoseComponent no longer crashes. Bug Fix: The SetVertexColor engine test no longer fails when skin cache is enabled. Bug Fix: The AlternativeSkinWeights engine test no longer fails when using skin cache mode. New: The maximum number of bone indices allowed per Skeletal Mesh section is increased from 255 to 65535. New: The maximum number of bone influences per vertex is now 12. New: Support for variable influences per vertex has been added and is used by default for meshes with more than 8 bone influences. New: Vertex color is now used as a factor when blending between original tangents and recomputed tangents when Skeletal Meshes are rendered via skin cache. New: Added Niagara support of unlimited bone influences per vertex for Skeletal Mesh rendering. Audio Crash Fix: Fixed a crash that occurred when attempting to set a delay line for a submix effect length beyond a statically allocated maximum length. Crash Fix: No longer crashes when the maximum for audio sources is reached and stopping sources are enforced but not observed by spatialization/modulation plugins. Crash Fix: Fixed a memory scribble/crash in the OSC plugin when passing empty strings as a payload in message members, or when sending messages larger than 1024 bytes. Crash Fix: Fixed a rare crash due to a race condition when stream caching is enabled and stat audio streaming is turned on. Crash Fix: Fixed a crash that occurred when FAudioCaptureAndroidStream::CloseStream was called on a stream that was already opened. Crash Fix: Fixed a rare crash in the Google Resonance plugin due to a race condition. Crash Fix: A PGO crash in NonRealtimeAudioRenderer has been fixed. Crash Fix: If a USoundWave has the Is Ambisonics flag set to true but is not a four-channel file, this will no longer cause a crash. Bug Fix: Removed a shadow variable that could cause XAudio2 to go silent if in an extreme performance-heavy state. Bug Fix: Fixed a bug that occurred when importing audio files that caused playback of out-of-date audio or no audio play at all. Bug Fix: The sound file importer no longer prompts for a template update if reimporting a sound, and no longer logs to the display if a sound is stopped due to reimport. Bug Fix: Audio engine no longer goes silent when swapping from a null device. Bug Fix: Fixed a hang when using -deterministicaudio during PIE (Play in Editor) shutdown. Bug Fix: Fixed the SoundCue Undo feature that resulted in SoundWavePlayers being left in a state where sound wave data would stop loading in the editor. Bug Fix: Overflow is now prevented in audio output by clamping before converting from float to PCM on a master output. Bug Fix: Changed the default value of the audio underrun timeout value from 0 ms to 5 ms to be generally more defensive against platforms requesting audio at too high of a rate. Bug Fix: Added an optional config to wait for audio to be rendered before falling back to submitting an underrun buffer. Bug Fix: Fixed a race condition between USoundWave being garbage collected and the AudioMixerSourceBuffer destructor. Bug Fix: Fixed a race condition with rapid stop() then start() calls on a procedural sound. Bug Fix: Fixed an issue with the buffer read/write in audio renderer. Bug Fix: Fixed iOS issues with audio playback and iOS notifications such as a user's Alarm Clock. Bug Fix: Moved StopLowestPriority/StopLowestPriorityThenPreventNew to use culling logic instead of eviction so it would no longer require evaluation of priority via a call to the SoundBase parse. Bug Fix: Fixed an issue with modulation output curves reordering incorrectly when modifying output curve values in the editor. Bug Fix: The output curve is now hidden when the respective sound control modulation patch is bypassed. Bug Fix: Various bug fixes were made for Time Synth. Bug Fix: Incorporated Microsoft's replacement DLL files for XAudio2_9 to fix missing audio on Win8 when running in Win7 backward-compatibility mode. Bug Fix: Memory usage of the VoIP system has been greatly optimized. Bug Fix: Fixed a minor leak that would occur when multiple audio components tried to play the same USoundWaveProcedural. Bug Fix: Fixed an issue in cook for some short assets when stream caching was enabled for iOS/Switch. Bug Fix: Changes to the Compression Quality slider while stream caching is turned on no longer leads to aberrant playback behavior. Bug Fix: Fixed an issue that would occur in priority sorting if an audio source was set to bAlwaysPlay and had an overall volume greater than 1.0f. Bug Fix: Various issues that would occur if a server were cooked with stream caching enabled have been fixed. Bug Fix: Made a fix for an audio hang at exit, such as during a PGO build, when using the NonRealtimeAudioRenderer. Bug Fix: Fixed a regression issue in the SoloSound console command. Bug Fix: There was an issue where the Preselect At Level Load field on the Random sound node would sometimes not actually cull child nodes. This has been fixed. Bug Fix: Sounds failing to cull on priority by the concurrency system has been fixed. Bug Fix: Fixed a rare COM initialization failure. New: The dynamic volume mix feature is now available for Sound Classes. Bug Fix: Fixed the stub issue for source descriptions in stat sounds when running the audio mixer. New: Added color options for the debug body text for audio debugging. New: Added a CVAR, au.DisableAppVolume, for debugging in IDE while the app is backgrounded. Bug Fix: For Natural Sound attenuation falloff, when dBAttenuationAtMax is set to a value larger than the default minimum of -60 dB, it no longer causes looping sounds to virtualize even though volume doesn't go to 0. New: Added the ability to specify whether a sound should continue to attenuate, go silent, or hold the maximum value when beyond the bounds of the attenuation shape. New: Added a bool return to the message payload array requests in the OSC API to determine whether it failed, or if the value is fresh. New: Added Blueprint functions to convert the Object path to and from the OSCAddress. New: Added the ability to remove OSC containers at index. New: Fixed pattern matching for wildcarding over a span of containers/methods. New: Added serialization for .ini files to control audio modulation mixes, with the audio modulation plugin to set game mixes without recooking the game. New: Mixes can now also activate any bus they act on if the bus is not already active. New: Integrated an update to SteamAudio, which brings dynamic geometry support and a number of bug fixes. New: Added the CVARs osc.servers, osc.server.connect, osc.server.connectById, osc.clients, osc.client.connect, and osc.client.connectById to manipulate and test OSC client/server connections. New: Added submix assetization for all master submixes that gives users better control over the master submixes and master submix effects. New: Added the ability for a user to set up distance-based sends to an array of submixes defined in attenuation settings. New: Added the ability to modify submix effect chains from Blueprint (add, remove, or replace). New: The Synthesis plugin is now enabled by default for new projects. New: Added soundfield submixes, endpoint submixes, and soundfield endpoint submixes to the engine, as well as extensible interfaces for soundfield formats and audio endpoints that are external to the audio engine. New: Added the ability to attenuate sound priority as a function of distance in attenuation settings. New: FAudioDevice lifespans have now been fully decoupled from the UWorld, meaning that you can arbitrarily spawn and use instances of the audio engine in code. This is useful for building profilers, or for rendering audio for arbitrary systems. \ Example usage: \ FAudioDeviceParams DeviceParams; \ DeviceParams.Scope = EAudioDeviceScope::Unique; \ MyAudioDeviceToRecord = AudioDeviceManager->RequestAudioDevice(DeviceParams); New: Added a noise gate to outgoing VoIP audio that can be controlled with the CVAR voice.MicNoiseGateThreshold. This is useful because it can be controlled separately from silence detection. New: Added voice.debug.PrintAmplitude to print the current outgoing loudness of the microphone. New: Made minor CPU optimizations to the VoIP system. New: Added voice.playback.ResyncThreshold, which can be used to target a given audio latency in order to better sync incoming VoIP with other systems. New: Added support for stereo VoIP input/output. New: The user can now scale distance attenuation ranges of attenuation settings via Sound Class properties. New: Added an option to override the default chunk size for streaming audio on most platforms. New: Added the console command au.streamcaching.FlushAudioCache to flush the audio stream cache. Various updates to the stat audio streaming debug view were also made. New: Added the ability to safely resize the stream cache at runtime using the au.streamcaching.ResizeAudioCacheTo console execute. New: For clarity, the Oculus Audio plugin now prefixes its logs with the words "Oculus Audio". New: Added voice.MicInputGain and voice.MicStereoBias CVARs to control the loudness and panning of outgoing VoIP audio. New: By default, VoIP will now be prioritized over all other audio sources. If you want VoIP sources to have the same priority as other audio sources in the engine, set au.voip.AlwaysPlayVoiceComponent=0. New: Added the ability to patch both outgoing microphone audio and incoming VoIP audio out of the VoIP engine using FVoiceInterface::GetMicrophoneOutput() and FVoiceInterface::GetRemoteTalkerOutput(). New: Added missing Solo/Mute icons. New: Moved the AudioCaptureModuleName spec to an .ini setting, and updated all relevant .ini files. New: Added a sort-by-priority feature for stat sounds; only shows pertinent information when sorting by a given field. Improvement: You can now disable the full path for a debug. Bug Fix: The MaxDistance calculation is now correct for box attenuation. Improvement: There is now a debug draw CVAR (au.3dVisualize.Listeners 1) to help debug if a listener is in or out of bounds for third person. Previously it was off by default with au.3dVisualize.Enabled 1). Improvement: Upgraded XAudio2_7 to XAudio2_9 to improve audio device-swapping support on a PC, and for general stability improvements. Improvement: Improved XAudio2Device logging; reduced log spam by adding a log-once behavior. Deprecated: Moved fast reverb to default and removed the fast reverb asset. New: Updated debug exec console commands to have new "au" prefix instead of the now deprecated "Audio" prefix. Deprecated: The "Audio" prefix has been replaced by the new "au" prefix. Automation Bug Fix: Fixed Gauntlet support for 32-bit Android builds. 32-bit builds are considered but discarded if 64-bit versions are also present. Bug Fix: Fixed issue where new/changed screenshots could not be approved in the editor on some platforms (Mac/Xbox/Switch). New: All test methods now return a boolean to indicate if the test passed or fail. Implemented new macros run those methods and, if they return false, insert a "return false" in the code, so that it exits the current test immediately. This lets programmers write automated tests much faster. See comments and example in Core/Public/Misc/AutomationTest.h Improvement: EngineTest runtime caps the number of threads used in System.Core.Misc.LockFree to reduce the run time on machines with many processors. Blueprints Bug Fix: Reused the default scene root node when creating a Blueprint from an actor that has a root node named the same as default scene root (most likely an "empty actor") Bug Fix: Removed a legacy "always compile on save" code path for LSBPs. Bug Fix: Fixed an editor crash caused by leaking Blueprint context menu actions for new sublevel script actors. Bug Fix: Removed an inconsistent tool tip from the interface list on BP editor. Bug Fix: Now allowing an interface function graph to be converted in the case where the interface function has changed and should now be placed as an event. Bug Fix: Fixed duplicated default scene root when creating a new Blueprint subclass from an actor in the level. Bug Fix: Deferred the serialization cost of unloaded Blueprint asset search data versioning info from editor startup. Bug Fix: Fixed improper context menu options. Blueprints now conform to an interface when it refreshes to ensure you cannot make graphs with the same name as an interface. Bug Fix: We now clear any existing reference to a class object on a new connection to the 'Class' input pin for the 'GetClassDefaults' node. Bug Fix: Fixed the nativized Blueprint class code output debugging tool to include interface types. Bug Fix: Reduced the overall time spent constructing search result nodes during a global Blueprint search. This should significantly improve the overall Find-In-Blueprint search speed in large-scale projects with lots of content. Bug Fix: Optimized Blueprint compilation dependency checking when large numbers of Blueprints are loaded. Bug Fix: Fixed a Blueprint compilation error that would result when a custom event was bound to a server-only delegate. Bug Fix: Changed Blueprint editor tab history to work across the entire Blueprint. Bug Fix: Fixed a typo in code that's emitted for properties that reference explicitly-mapped 'noexport' struct fields in nativized Blueprint code. Bug Fix: Fixed an invalid cast in nativized Blueprint code for converted user-defined enum fields. Bug Fix: Fixed incorrect target platform names for some client/server configs that led to an ensure at cook time with Blueprint nativization enabled. Bug Fix: Updated menu item labels for the native AttachToActor/AttachToComponent APIs to better tell them apart in the Blueprint editor context menu. Bug Fix: Removed the clearing of keyboard focus when showing the details panel because it caused errors when undoing while a property edit still had keyboard focus. Clearing the keyboard focus during the process of the undo would cause another transaction to start, breaking the current undo transaction. Bug Fix: Fixed an incorrect platform name being used in manifest lookup for some cook targets during Blueprint nativization. Bug Fix: Fixed blind reinstancing of natively-constructed nested subobjects during class-owned sub-object initialization in nativized Blueprint C++ code. Bug Fix: UCLASS metadata now supports a 'CustomThunkTemplates' class so that CustomThunk nodes can be nativized. Bug Fix: Updates force deleting to also replace references on Actors with no owning World (previously these actors were just ignored). This ensures in the correct results of objects created in actor components such as Child Actor Component and in Sequencer. Bug Fix: Updated Blueprint graph to cancel ZoomToFit on nodes when panning with the middle mouse button. Bug Fix: Self pins are now properly automatically converted to soft object references when required. Bug Fix: Viewing a Blueprint actor in the Blueprint Editor Viewport no longer changes a custom thumbnail zoom level when the zoom level is really close to the actor. Bug Fix: Fixed asset registry discovery slowdown caused by logic that was trying to build up a cache of component type information. Bug Fix: Fixed bug that would cause Blueprint default values to be lost after a Blueprint member variable was renamed Bug Fix: CompileAllBlueprints commandlet now re-instances, ensuring that it does not later crash during garbage collection. Bug Fix: Fixed deterministic cooking issue with multicast delegates on components bound in actor event graph. Bug Fix: Fixed issue where Blueprint compilation errors trigger a modal dialog in unattended mode. Bug Fix: Fixed bug that would cause references in UI code to be updated to point at the child Blueprint after compiling the child Blueprint (or Blueprint that resulted from duplication.) Bug Fix: BlueprintDisplayName and BlueprintDescription now get cleared on the new copy when duplicating a Blueprint asset. Bug Fix: Prevented deprecated async action factory methods from showing up in the BP context menu (unless the show deprecated functions setting is enabled.) Bug Fix: Toggling the 'Expose Deprecated Functions' setting now takes effect immediately rather than requiring an editor restart. Bug Fix: Fixed native struct-typed variables on spawned Blueprint instances not being initialized to the modified default value of any non-reflected field of the struct exposed for edit through a details customization. Bug Fix: When checking for zombie members in a user defined struct, now ensure that container types get checked as well. Bug Fix: Using "Assign selected Actor" in the context menu of an event node in a level Blueprint to assign an event to an unrelated type of actor will no longer cause a crash. Bug Fix: Fixed bug where subobjects of Blueprint classes that were async loaded in the editor would stick around forever as phantom objects, which could cause crashes or corruption when changing maps. Bug Fix: Ensured that the RF_DefaultSubobject flag now persists when copying an object in the level editor. Bug Fix: Fixed the cause of NewClass != OldClass assertion failure when async loading Blueprints in the editor. Bug Fix: Fixed a crash in the Blueprint debugger when hovering over container types inside a function call. Bug Fix: Fixed non-functional collision in a nativized Actor Blueprint containing one or more components set up to use a custom collision profile. Bug Fix: Fixed a bug introduced in 4.24 where soft object and class references on Blueprint pins were not being properly tracked for cooking/reference viewer. Any assets that were saved in a 4.24 version of the engine and are missing references can be resaved to fix this. Bug Fix: Changed duration of a notification toast in the Blueprint Editor to improve visibility. Bug Fix: Added compilation validation for pin connections on tunnel nodes, allowing the proper display of error messages when changing input pin types on macro graphs.This additional validation may point out errors that were previously undiscovered in Blueprints. Bug Fix: Fixed graph pin default values being editable even when set to read-only/ Bug Fix: Fixed issue introduced in 4.24 where breakpoints or navigation could cause multiple tabs to be open at once for the same function. Bug Fix: Updated Unreal Header Tool to properly support int64 variables that are marked with with ExposeOnSpawn metadata. Bug Fix: UBlueprintGeneratedClass::GetAuthoritativeClass() returns itself as the authoritative class when it's cooked and therefore has no source UBlueprint. New: Made the "Select Function" drop down searchable on Create event nodes. New: Added Blueprint support for FIntPoint. New: Added conversions from int64 to int32 and to uint8 (byte). New: Decoupled Blueprint indexing from the search thread in order to take advantage of multiprocessing, and improved the overall Find-In-Blueprints experience while indexing. New: Added GetPlatformUserDir to Kismet function library. New: Added additional type support(bool, string, name, and Object) for the "Format Text" node. New: Added an IsEmpty function to String types in the KismetStringLibrary. New: Now allowing TArray, TSet and TMap elements to be visible in the SCS tree if the container is marked as Visible/Editable. New: Users now have the options to display access specifiers for functions and variables in the "My Blueprint" window. This feature can be enabled by selection the view options for the "My Blueprint" and selecting "Show access specifiers in My Blueprint view" New: Exposed Runtime Virtual Texture assets to Blueprint. New: Optimized component instance data determination and application when rerunning construction scripts. This can significantly speed up rerunning the construction script when using AddComponent nodes in the User Construction Script. New: Implemented variadic function support for Blueprints Variadic functions are required to be a CustomThunk marked with the "Variadic" meta-data. They can then be used from a custom Blueprint node to accept arbitrary arguments at the end of their parameter list (any extra pins added to the node that aren't part of the main function definition will become the variadic payload). Variadic arguments aren't type checked, so you need other function input to tell you how many to expect, and for a nativizied function, also what type of arguments you're dealing with. New: Exposed TransformVector4 as a static function to Blueprint. This will allow users to transform vectors by 4x4 matrix in BP. New: Added a BlueprintFileUtils plugin to expose file system operations to Blueprint. New: The output array of GetAllActorsOfClassWithTag will now use the type of the input class if a class literal is used. New: Now allowing container pins in "make" nodes and structs to be empty by default. This will auto generate the default value if none is provided. This makes the experience of using custom UStructs more fluid and prevents there being a node with a compiler error by default. New: Added horizontal view mode to the Blueprint diff tool. API Change: Removed unused variable UBlueprint::SearchGuid. Core Crash Fix: Fixed a crash when a sparse multicast delegate gets destroyed mid-execution. Crash Fix: Fixed UAnimCurveCompressionCodec serialization crash when loading cooked content in the editor. Crash Fix: Fixed a server crash when an asset file exists but can't be read. Bug Fix: Fixed GDB pretty printers for FString, FName, FMinimalName, FNameEntry and TWeakObjectPtr. Bug Fix: FString::Mid now gracefully handles negative start and negative count inputs. \ FString::RemoveAt can no longer leave an unterminated string. Bug Fix: Ensure that unsaved and unused Blueprint classes are removed from the class hierarchy cache when deleted. Bug Fix: Added the name of failing struct to the bad alignment assert message in UScriptStruct::InitializeStruct. Bug Fix: Fixed FName number parsing from unterminated strings—copy the number to a temporary buffer until we have an overload of Atoi64 that takes the input length. The generic implementation of Atoi64 for TCHAR converts the input to ASCII then converts that to an integer. For unterminated strings, this could cause an access violation or convert an obscenely large input to ASCII. Bug Fix: Fixed a race condition in FOutputDeviceRedirector that was corrupting logs. Calling EmptyBufferedLines() before serializing the buffered lines was allowing their memory (owned by FLogAllocator) to be reused by buffered lines from other threads before the main thread had consumed them. Bug Fix: Added support for passing "abscrashreportclientlog" and "nullrhi" command line arguments to crash reporter monitor. Bug Fix: Fixed ensure messages disappearing in editor log. Calls to PanicFlushLog should only happen on crashes, and not ensures. For ensures on any thread other than the main thread the editor log, which is not thread safe, the editor will lose these messages. On crashes we don't care about, like on non thread-safe output devices since we will not recover, we are only interested in the log file. Bug Fix: Fixed incorrect sandboxed paths. When the argument is an already sandboxed path, ConvertToSandboxPath was adding another copy of the path to the end. We need to test if the path is inside the full sandboxed path before testing if it's inside the project directory (since the former is inside the latter). Bug Fix: Fixed incorrect callstack in crash reports. When a ensure was reported before a crash, the callstack from the crash was added to the ensure stack frames. This is because the debug helper instance used to analyze the minidump is a singleton and not designed to be reused. Use the GetNew method to create a new instance of the helper, and delete the old one. Bug Fix: Fixed TIndirectArray::operator=(TIndirectArray&&) The default version of the move assignment operator fails to free the pointed at data in the existing array before assigning, and thus the current contents were all leaked. Bug Fix: Implemented a faster version of FRandomStream::GetFraction that doesn't require a type conversion to integer and back to float. This fix affects any use of FRandomStream::GetFraction, which will now return a different value for the same seed. Bug Fix: VS2019 v16.5.0 compilation fixes. Bug Fix: Changed logic in FArchiveFileReaderGeneric::InternalPrecache to avoid buffer shrinking. Previously, the buffer could potentially be shrunk to a very small size if you read data at the end of a file before seeking to the beginning and continuing to read other data. Bug Fix: Fixed VS2019 compilation issues with int32 to bool implicit conversions. Bug Fix: The engine will now skip Collecting Garbage during initial load because it's not safe since garbage collection token streams may not be assembled yet. Bug Fix: When cooking by the book, the engine will now check if the specified target platforms are editor platforms and reject them if so. This is to prevent common mistakes when manually entering cook command lines that result in crashes when cooking. Bug Fix: Fixed TFastReferenceCollector not setting the serializing object when processing TMap and TSet references that resulted in issues with finding all references. Bug Fix: Changed how the engine detects trashed user structs—instead of looking at field lists, it's now going to be using dedicated functions. Bug Fix: Fixed FAsyncPurge::TickGameThreadObjects not respecting Garbage Collector time limit that could lead to bigger hitches on the game thread (regression). Bug Fix: Reduced stack size usage by LogTrace in debug builds. Bug Fix: Change the cached World reference in ULandscapeInfoMap to a TWeakObjectPtr to safeguard against order or destruction issues on World unload. Bug Fix: Fixed incorrect UObject archetype caching when the object's Outer is still pending load. Bug Fix: Fixed a crash when accessing TSubclassOf default object when its class is null. Bug Fix: Fixed Tickable objects sometimes failing to register when re-allocated with the same address as previously destroyed ones. Bug Fix: Fixed application hanging in MapProperty serialization. Bug Fix: FORCEINLINE functions that were made virtual in FAsyncLoadingThread. New: Removed _VTABLE define in favor of fvisibility-ms-compat. This will hide symbols but keep type visibility as default (for example, exported) over needing to explicitly mark the types as exported. New: TResizableCircularQueue now supports non-trivial types and default constructed pod types. New: Added an implementation of Robin Hood hash table optimized for small keys and values. New: Added float/double comparison function (FMath::IsNearlyEqualByULP) that compares floats according to the number of units-in-last-place differences (rather than using an absolute tolerance value). New: Added double version of FMath::IsNaN and FMath::IsFinite. New: AddAnnotation function (in various annotation classes) now consistently supports moving incoming annotation data. New: FString::Left, LeftChop, Right, RightChop, Mid, ConvertTabsToSpaces, and TrimQuotes now have inline versions. New: FString:: Left, LeftChop, Right, RightChop, and Mid now have rvalue versions. Added rvalue overloads for Left, LeftChop, Right, RightChop, and Mind. Added inline version of TrimQuotes. New: FPaths::ProjectSavedDir now returns const FString&. New: Added the ability to stop a demo after a CSV profile completes with -csvDemoStopAfterProfile. New: Added a CSV_PROFILER_USE_CUSTOM_FRAME_TIMINGS preprocessor macro to enable a game to call FCsvProfiler::BeginFrame and EndFrame instead of using the engine defaults. New: Added new API for querying extra development memory. In the CSVProfiler, this gets reported as metadata and by default, it subtracts the dev memory from the free memory stat. This is controllable with the csv.AdjustPhysicalFreeForExtraDevMemory CVar. New: Added FPathViews as an analog for FPaths that use FStringView and FStringBuilderBase. This includes implementations and tests for: GetCleanFilename GetBaseFilename GetBaseFilenameWithPath GetPath GetExtension GetPathLeaf Split Append New: Optimized FFileHelper::BufferToString to eliminate its use of a temporary allocation. Converting UTF-8 inputs was using a temporary TCHAR buffer the same size as the output. The conversion is now done directly into the allocation for the target string. New: Optimized engine startup times by creating and polling fewer streamable handles. This targets AssetManager::ChangeBundleStateForPrimaryAssets(). New: For CsvToSVG 3.31, reduced event spam by grouping duplicate events if they appear within a certain number of pixels of each other, including: Displaying the count Reducing the frequency of event lines Reduce the alpha when spamming occurs New: Implemented an experimental Robin Hood hash table container for performance-critical code. New: Attempting to read past the end of a file on Windows will once again return false rather than being silently truncated. All platforms should now treat this as an error. New: Running with -LLM enabled now handles more memory allocations than before. New: Fixed a potential (but rare) deadlock when attempting to load pak file data from multiple threads at once. New: Added warning to FBulkDataBase::GetCopy to highlight cases where it would be faster to let the method allocate the buffer rather than allocating one up front. New: Added and used StringFwd.h for string builder and string view declarations. Including StringFwd.h in place of forward declarations offers more flexibility to change how these types are declared. New: Cleaned up FQueuedThreadPool interface to make it easier to plug in alternative implementations. New: Backed out pull request (submitted as CL10610795 ) for TInterval::ToString() because it interacts poorly with some as-yet-unidentified change which was robo-merged later from another stream. New: VS2019 compilation fix for #include because VS2019 only supports #include as per the standard. New: Added TryMalloc(), which may return a nullptr if the memory allocation request cannot be fulfilled. New: Made refcount validation conditional (only in Debug or when slow checks are enabled). New: Added mimalloc allocator from because it performs better than mimalloc in cooker benchmarks. We still default to TBB malloc until the behavior has been explored in more depth. To opt in to mimalloc, pass -mimalloc on the command line. New: Fixed -nothreading logic not working properly due to one-time initialization of flag before the command line has been set. New: Added SCOPE_MS_ACCUMULATOR (used now to track load times via on-screen stats, but can be used generally). New: Changed the AsyncPreload stuff to be opt-in by platform. New: Garbage Collector will now only run clustering code if clusters are enabled and when actual clusters are allocated. New: When doing incremental unhashing, Garbage Collection will no longer log each iteration to reduce log output. New: Fixed an issue with the resave commandlet not being able to perform a P4V commit after the UAsset was resaved. New: Added event tag suppression to OutputDeviceMemory. Fixed ring buffer writing when bounds crossed. New: Cleaned up Garbage Collector Token Stream generation code and merged token debug info with the token stream to reduce data duplication and simplify the API. New: Added -skipiostore UAT option to help with batch files that specify -iostore. New: Added FReferencerFinder helper class that finds all objects referencing the provided list of objects. This is similar to FArchiveHasReferences but up to 16x faster. New: Disabled filename extension exclusions when using pak files in an editor build. New: Move module manager extra search path configuration out of FModuleManager::Get() and into a seperate function which is called from the launch engine loop after the platform files are created. This ensures that editor builds that use pak files have the paks mounted before we start scanning for data driven platform *.ini files. New: Pak platform file changes to support mounting pak files in the editor. Always look for pak files in the standard locations to determine whether we should create the platform layer. When looking up a decryption key, check the registered list for all guids, even empty ones. We want to support pak mounting in non-monolithic builds where we don't have an embedded key. Remove the initialization-time check that the decryption key exists for pak files with an encrypted index. The condition to test is more complex when considering editor pak mounting, and we will get a meaningful error almost immediately afterwards anyway. New: Don't try and load plugin manifests in editor builds New: Optimized GetArchetypeFromRequiredInfoImpl by reusing cached archetypes New: Inlined IsGarbageCollectionWaiting so that it doesn't appear in profiles New: Kismet reinstancing code will now use FReferenceFinder instead of FArchiveHasReferences to improve reinstancing performance New: If Garbage Collector Clustering is disabled, Garbage Collector will now run with code responsible for handling clusters compiled out to avoid unnecessary checks. Actor clustering will now be disabled if clustering is disabled in general Added a separate setting for clustering generic assets (gc.AssetClustreringEnabled) to be able to enable actor clustering independently from asset clustering as it was possible in the past New: Included the global UObject array memory usage in 'obj overhead' command report New: Added 'obj overhead [-detailed]' console command to print out the total memory overhead of UObject hash tables and maps New: FORCEINLINE IsCollectingGarbage to fix performance hotspot. Improvement: Extended FStringView to make it easier to use as a drop-in replacement for FString where appropriate The family of functions that return a slice of the view has been mirrored from FString, including the variants of Left, Mid, Right, and Trim. Like FString, Equals and Compare are now available and case-sensitive by default. View literals are now available, such as TEXT("Value"_SV). A string view is now treated as a contiguous container of characters which makes it compatible with many algorithms. Improvement: Fixed up some code for C++17 compilation ( std::binary_function, std::unary_function and register keyword are all deprecated). Improvement: Fixed missing vtable for FActorTickFunction. Improvement: Garbage Collector optimization where class reference will now be completely ignored by the Garbage Collector for instances of native classes. Additionally, the Outer reference will not be processed for Packages. \ Deprecated: Deprecated fixed string builders for 4.25. Fixed string builders are difficult to use correctly in many cases, and when used correctly, offer no performance or memory improvement over an extendable string builder of the same size. Deprecated: Removed FGenericPlatformMisc::HandleIOFailure since it's no longer in use. Deprecated: Removed rollup support from DDC interface since it is no longer useful. Deprecated: Added deprecation warnings when using the old (non-EDL) loading path in cooked builds and when cooking. Toggling EDL off in Project Settings will also result in a deprecation pop up in the editor. Deprecation warnings can be disabled by adding the following lines to DefaultEngine.ini in the project's config folder: [/Script/Engine.StreamingSettings] s.DisableEDLDeprecationWarnings=True Removed: Removed deprecated TWeakObjectPtr operator=. Removed: Removed FAsyncUncompress. Removed: Deleted deprecated functions from UnrealString.h Removed: Removed FPaths functions marked as deprecated for 4.18. Removed: Removed deprecated Histogram functionality. Removed: Removed (miscellaneous) deprecated functions from: App.h AssertionMacros.h MonitoredProcess.h Serialization/CustomVersion.h Removed: Removed deprecated functions from Compression(.cpp/.h). Removed: Removed deprecated functions from OutputDevice(.cpp/.h) Removed: Removed redundant UPackage::AddReferencedObjects function. Cooker Bug Fix: Fixed a crash in the cooker difference check caused by FArchiveProxy not forwarding on calls to SetSerializedProperty() and SetSerializedPropertyChain(). Bug Fix: Fix to make sure SandboxDirectory is always absolute. Bug Fix: Fix to handle an invalid platform passed to the cook commandlet without a fatal assert. Bug Fix: Fixed extra DDC puts when DDC logging is enabled on the command line. Before the fix, enabling verbose DDC logging with -ini:Engine:[Core.Log]:LogDerivedDataCache=Verbose was causing extra idempotent puts because of checks for the DerivedDataCache commandlet that checked for "DerivedDataCache" anywhere in the command line. Check for "Run=DerivedDataCache" avoids the problem. Bug Fix: Fixed path prefix comparison when cooking with ErrorOnEngineContentUse. This is a fix for not being able to cook plugins content. Bug Fix: Fixed iterative cooking on Windows when the project is outside of the root directory but on the same drive. Bug Fix: Fixed the following cook performance and correctness issues in UPackage::Save. Cook Performance: Avoid performance cost of calling CreateTempFilename() for cases where we don't actually need a temp filename (cost involves a FileSize check with the file system). Avoid the performance cost of calling FileSize() to get the saved size of packages and use the recorded buffer size instead (which gives uncompressed sizes). Correctness: When saving packages asynchronously, the UPackage::FileSize field was populated with the filesystem size of the package, but since that may not have been written yet by the async writer, it was often set to (uint64)-1. When computing the MBWritten cook stat, make sure that bulk data is included in the accumulated total and that the stat isn't calculated incorrectly in DiffOnly modes. Temporary files would be left lying around if the package was determined to have no exports or to have been completely nativized. Bug Fix: If chunk size is too small, abort generation of streaming install manifest. Bug Fix: To fix log spam (of about 1.8 GB in cook logs), changed ShaderPipelineCacheToolsCommandlet's log to verbose. New: Moved MD5 computation on writer thread during package save. New: Improved cook times by avoiding re-computing the static parameter values twice during calls to FMaterial::CacheShaders (once in FMaterialResource::GetShaderMapId and again in FMaterialResource::GetStaticParameterSet) by enabling the MaterialInstance to store a cached parameter value set and re-use it in selected scopes (only applied to FMaterial::CacheShaders for now, but may be applicable elsewhere). Testing notes include: QAGame produced the same WindowsNoEditor cooked data (barring metadata that had some non-determinism issues) with/without this change. InfiltratorDemo looked visually correct on Win64 with and without cooked data. New: Added SessionPlatforms to record the platforms being cooked in the current CookByTheBookSession or CookOnTheFly request. This fixes the performance issue of unnecessarily generating AssetRegistries for unused platforms when cooking in the editor. New: Optimized IsEventDrivenLoaderEnabledInCookedBuilds for commandlets like the cooker. New: Reduced iterations on Material Functions in UMaterialInstance::GetStaticParameterValues by changing the underlying API to do lookups for multiple parameters at the same time. Tested by a full cook with debug code that checked (using check()) whether the parameter values and expression IDs output from the old code path was equal to the parameter values and expression IDs output from the new code path. New: To save time when the sandbox directory has many files to delete, delete the Sandbox directory asynchronously while the rest of the cook goes ahead. New: Reduced cost of GetDependentFunctions on UMaterialFunction. Changed the following: Introduce IterateDependentFunctions as a method to reduce temporary allocations when computing dependent functions. Add transient UMaterialFunction::DependentFunctionExpressionCandidates field (computed in PostLoad and PostEditChangeProperty) with only function expression types that we might have dependent functions on to reduce the amount of time we spend iterating on FunctionExpressions and casting them to see if they're a type we could have to get dependent functions from. New: Reduced cost of FMaterialUpdateContext and RecacheMaterialInstanceUniformExpressions to save 4 minutes off of a large dataset cook time. Changed the following: FMaterialUpdateContext doesn't create a FGlobalComponentRecreateRenderStateContext instance unless FApp::CanEverRender is true. FGlobalComponentRecreateRenderStateContext stores FComponentRecreateRenderStateContext instances in a TArray instead of a TIndirectArray to avoid excessive numbers of individual allocations and frees. FGlobalComponentRecreateRenderStateContext doesn't create FComponentRecreateRenderStateContext instances for components that aren't registered or those that don't have a render state created. Analysis of the code shows that constructing those component contexts is pointlessly hitting the allocator (or bloating the TArray) because they will not do any useful work in their destructor. RecacheMaterialInstanceUniformExpressions doesn't incur the cost of a TObjectIterator if FApp::CanEverRender() is false, as static analysis of the code shows no useful work done within the iteration if FApp::CanEverRender() returns false. New: Optimize significant callers of FName::ToString to save about 30 seconds when cooking a large title New: Add debug context to DDC requests that take the key as a parameter The existing GetSynchronous, GetAsynchronous, and Put functions are deprecated in favor of the DebugContext overloads. New: Optimized FindTargetPlatform and its most expensive caller during cooking to save 42 seconds of wall time when cooking a large project. New: Optimized ShaderPipelineCacheToolsCommandlet to reduce the execution time of the build command by 90 percent on a large title—time savings affect full, iterative, and single-package cooks. New: To help with profiling, added named events to package saving. New: Optimized IsEventDrivenLoaderEnabledInCookedBuilds() to not query the config system for the current state, but just use the global cvar variable that the config system writes to. New: In the Cooker, do not traverse primitive components to update the material (resulting in a minor speed improvement). Improvement: Major update which makes -DiffOnly cooks 20x faster to help find non-deterministic cooking issues. Testing on a large title showed a 99.6 percent reduction in the cost of FindAssetInPackage and a 95 percent reduction in the overall -DiffOnly cook time. Improvement: Made the following API change: A type that derives from FDerivedDataCacheInterface will need to change to the new API for GetSynchronous, GetAsynchronous, and Put that have a DebugContext parameter. Improvement: Improved DDC error when no backends are available. CsvCollate Bug Fix: Fixed an out-by-one bug with -avg for CsvCollate 1.31. The stats were being divided by csvCount-1 instead of csvCount. New: Added CsvCollate support for search patterns via -searchpattern . Fixed out-by-one error with logging Added -recurse argument to make recursion optional Listed CSV files to log New: Added CsvCollate 1.32- metadata filtering support. -metadatafilter "key0=value0,key1=value1,..." Memory Profiler Crash Fix: Fixed crash if UEngine::AddOnScreenDebugMessage was called in threads other than the Game thread. This fix secured the usage of UEngine::PriorityScreenMessages and UEngine::ScreenMessages. Bug Fix: Fixed a bug occurring in scenarios where reporting ensures to the Crash Report Client would overrun a fixed size string buffer. This issue appeared after two ensures fired, since the dynamic string buffers lifetime was equal to the crash reporting thread, and only reset on initialization. Bug Fix: Added back stats instrumentation of local Blueprints function calls. Bug Fix: Fixed FName batch loading check (OldUsedSlots == UsedSlots) in EngineTest. Bug Fix: Prevented use of Hidden enum entries as default values for UHT exposed functions. These enum entries aren't exposed to Blueprints and Python, so they were causing UI issues or script syntax errors. Bug Fix: Crash report client doesn't need full access handle to runtime when monitoring. This behavior triggered anti-cheat warnings in some cases. Use a limited access flags instead. Bug Fix: Made StaticFindObject handle "any package" search with specific package name supplied in object path, fixing the ambiguous search warning. Bug Fix: Fixed issues with plugin stack traces not resolving correctly. Disabled on demand symbol loading due to an issue where modules that are not in the main binary directory would not have symbols loaded correctly. This manifested as "UnknownFunction" entries in the logged stack trace. External crash reports were not affected. Bug Fix: Fixed unterminated number parsing for new FName constructors with fixed length. Bug Fix: Made StaticFindObject handle nested subobjects when searching for any package without a package pointer. Bug Fix: Fixed LevelStreaming when changing desired level between request and completion with a loaded level. Pending unload is never processed and the state machine stays indefinitely in the LoadedNotVisible state. Bug Fix: Fixed file helper utilities not returning failure when closing/flushing fails. Bug Fix: Fixed FMacPlatformProcess::GenerateApplicationPath to prevent searching applications anywhere on the machine (anything indexed by Spotlight) rather than just looking under the Engine directory (as specified by the function's documentation). Bug Fix: Enable TStrongObjectPtr to be created during static initialization. Bug Fix: Added static assert to TPointerIsConvertibleFromTo to stop it from being instantiated with incomplete types and giving wrong answers. Bug Fix: Fixed error message in FMatrix::ErrorEnsure(). Bug Fix: To handle Presaves that cause a circular reference that loads the package being saved again, moved ResetLoaders after Presave. Bug Fix: Fixed WriteTableAsJson to enable writing utf8 so that FillDataTableFromJSONFile and applications other than Unreal can recognize and parse the file type. Bug Fix: Changed FConfigFile::UpdateSections to preserve relative location of modified sections in the ConfigFile. Bug Fix: Fixed transient properties being written out during JSON object serialization. Bug Fix: Fixed TArray and FString assignment ignoring previously reserved memory. Bug Fix: Fixed PURE_VIRTUAL containing code with commas in it. Bug Fix: Fixed WindowsPlatformFile not handling greater-than 32bit read and write sizes correctly. Bug Fix: Improved argument type deduction for TLess and TGreater. Bug Fix: Fixed various LLM scope counters, added some missing texture scopes, and removed the explicit RHIMisc scopes from uniform and structured buffers so the parent scope is used. This makes tracking down buffer memory much easier (and more correct), as the various calling scopes will inflate rather than a single large RHIMisc scope containing all uniform and structured buffer allocations. Bug Fix: Added GetTypeHash() support to TWeakObjectPtr properties. Bug Fix: Fixed StaticFindObject() search for any package with substring package. names Bug Fix: Fixed FName construction to avoid stomping the stack for long names. Bug Fix: Fixed CString::Strspn and Strcspn. Bug Fix: Fixed value FGenericPlatformMemoryConstants::TotalPhysical, which wasn't being set on Windows. Before the fix, it was initialized to zero without being set. Bug Fix: Use UsedPhysical memory stats in LLM captures for memory allocated like on mobile devices to provide more accurate data. Bug fix: FLazyName handles literals with "_[number]" suffix and ASCII literals, which unintentionally created an eager FName and converted that to a FLazyName. New: Deprecated FCustomVersionContainer::GetRegistered, replacing it with a new thread-safe API, which includes the following changes. Common use case: The most common use case of getting a single registered version has changed: Old: CustomVersionContainer::GetRegistered().GetVersion(Guid); New: FCurrentCustomVersions::Get(Guid).GetValue(); Advanced use cases: More advanced use cases that compare multiple custom versions should use new FCurrentCustomVersions::Compare instead of looping over versions and doing lookups. New: Implemented unique Algo to remove duplicates from a range (similar to std::unique). New: Allowed TruncToFloat intrinsics used on platforms that support them. Also, added SSE and AVX defines to allow specifying availability on PC, MAC, and Linux. SSE and AVX defines have ALWAYS_HAS and MAYBE_HAS variants to distinguish between available to compile and available to run without the need to check cpuid. New: Added FName length bounds-checking on top of existing check() to guard against malicious input in shipping builds. New: Optimized unversioned array property loading. New: Added a parameter to GetActorBounds and GetComponentsBoundingBox to recursively inspect sub-actors. Also added generic ForEachComponent utility function on actor, and merged code from GetComponent/GetActorBounds/GetComponentsBoundingBox/ ... to benefit by avoiding building an array of components in place. Modifying the list of components in the functor is prohibited (like in C++ range-based iterators), otherwise, users can still use GetComponents to retrieve components in a local list and perform changes afterwards. New: Added another variant of DrawCoordinatesSystem. New: Fixed FArchiveFileReaderGeneric to always align buffer sizes to a power of two in order to take best advantage of hardware I/O. New: Implemented graceful handling of ue4stats files that don't have frames. New: Enabled 'if constexpr' support on more compiler versions. New: Added a UE_STATIC_ASSERT_COMPLETE_TYPE(Type) macro that causes a compile error if Type is incomplete. New: Added functionality for bImplicit send configuration variable when Crash Report Client runs in monitor mode. This allows a game to automatically send crash reports without user interaction, displaying a native OS dialog when completed. New: Optimized FCborStructSerializerBackend (used by MessageBus/UdpMessaging) to encode TArray/TArray as a byte stream rather than an array of numbers, approximately reducing message sizes by a factor of two. New: Added move construction between TArrays with different-width heap allocators (for example, TArray to TArray64). New: Added GetGeneratedTypeName(), which creates a non-portable static string representation of any C++ type. New: Made FPrimaryAssetId FString constructor explicit, added new faster construction paths and StringBuilder << support. New: Added rvalue support to TSharedPtr::ToSharedRef, TWeakPtr::Pin, and TSharedFromThis::AsShared. New: Removed unnecessary circular dependency load deferring work during property serialization when EDL is enabled. New: Exposed working directory for FInteractiveProcess. New: Added ProcessHandle getter in FMonitoredProcess and FInteractiveProcess. New: Added Algo::ForEach. New: Added a predicate version of LoadFileToStringArray, which filters lines based on content. New: Made the following classes final when they already had a final virtual destructor: TGraphTask FD3D12StagingBuffer FOpenGLGPUFence FOpenGLStagingBuffer FD3D11StagingBuffer New: Separated out FArchiveState from FArchive so that it can be queried from FStructuredArchive without providing access to the entirety of the underlying FArchive. New: Added PLATFORM_COMPILER_HAS_FOLD_EXPRESSIONS feature check macro. New: Added operator* for TOptional. New: Used a fixed-size inline buffer for the pimpl in FStructuredArchiveFromArchive to avoid frequent allocations. New: Added FName batch serialization and aligned up IoStore chunks to 16B. The batch serialization is used by the new IoStore and AsyncLoading2. New: Tagged property serialization optimizations. New: Added stateful deleter support to TUniquePtr. Also, fixed TUniquePtr's assignment operator which takes a TUniquePtr with a different deleter. New: Added UE_ASSUME macro that works on both MSVC and Clang. New: Fixed wasteful slack being allocated during TCHAR* -> FString construction. New: Implemented LexFromString from FStringView for primitive types. New: Optimized byte swapping to use intrinsic code provided by MSVC and Clang, resulting in up to 6x faster performance on MSVC. Also, Clang was able to optimize the generic C++ code into the intrinsic equivalent. New: Added UE::String::BytesToHex and UE::String::HexToBytes, which do not require FString as input or output. New: Added FName::TryAppendAnsiString as an optimization because most names are not wide. New: Implemented the following engine init optimizations and new string conversion paths. Added: Core string conversion paths FString::Append() and += overloads to avoid pointless temporaries when appending C strings UTF32 <-> TCHAR string conversion macros Optimized: QuotedString parsing Property parsing PackageName SoftObjectPath New: Added TEqualTo as a binary predicate to perform equality comparison of its arguments. New: Added the following: Algo::Replace can be used to replace FString::ReplaceCharInline on array views of characters such as on a view of a string builder. UE::String::ParseLines can be used to split a string view on line endings to find the non-empty lines in the view. The intent is to replace functions like FString::ParseIntoArrayLines and FFileHelper::LoadFileToStringArray. UE::String::ParseTokens[Multiple] can be used to split a string view on one or more single-character or multi-character delimiters. The intent is to replace FString::ParseIntoArray. Extend Algo::CompareByPredicate to allow the input ranges to be different types and to take them by universal reference. Extend TArrayView and MakeArrayView to allow array views to be created from initializer lists. New: Extended StringBuilder to make it easier to use in place of repeated FString concatenation. The new functionality includes: AddUninitialized Appendf Join JoinQuoted RemoveSuffix Reset Integers can now be written to string builders using operator<< New: Extended FSHAHash for string view and string builder. New: Added string view constructors and a string builder append operator to FName. Improvement: This major improvement includes unversioned property serialization disabled by default, resulting in 2x faster loads and 6x more compact compared to tagged property serialization. The most important differences to tagged properties are the following : Assumes code and data schema matches Only used in unversioned (cooked) packages Doesn't store field types nor converts values when field types change Does't handle name or GUID redirects Doesn't store memory-wise zero properties It's based on property indices instead of property names/tags The metadata is placed in a compact header instead of tags/metadata being interleaved with data Improvement: Optimized iteration-heavy StreamableManager functions. Improvement: Optimized FPackageName::GetShortFName. Improvement: Optimized FPackageName::TryConvertFilenameToLongPackageName to save 13 seconds of wall time when cooking a large project. Improvement: Added initial capacity for container allocators so containers can start with inline allocator capacity. Speeds up TBitArray, TMap and TSet construction and initial insertion. Natvis Bug Fix: Fixed TArrayView Natvis sometimes not working correctly. New: Changed FNameEntryId visualizer to always display all names without any quotation marks. New: Added FStatNameAndInfo visualizer, using it to visualize FStatMessage in non-debug builds. PerfReportTool Bug Fix: Fixed PerfReportTool issue when adding numeric and non-numeric metadata to summary metadata columns. When this happens, we just convert the column to strings instead of throwing an exception. Bug Fix: Fixed collated/email display when there are missing values. Instead of displaying nothing for that entry, compute min/max/avg only for values that exist. Added support for map overlay summaries Added support for limiting decimal places for CSV graphs Reduced report size by limiting decimal places for CsvToSVG graphs. New: Added boundedstatvalues summary type for visualizing particular stats between two events, with configurable columns. Each column displays a stat value, computed with one of the following formulas: sum, percentoverthreshold, percentunderthreshold, average and streamingstressmetric (the latter is used for File I/O) Added support for shared summaries (set up once and use in multiple reports) Added snapToPeaks property for graphs, so that it can be disabled in some cases (for example, with smooth graphs). New: Added support batched graph generation for PerfReportTool. This update increases performance by 30 percent over the old multi process method, and significantly lowers CPU and disk usage. Fixed determinism issues with report generation (Test cases are now identical between runs) Enable with -batchedgraphs Added batch and multi thread support via response files for CsvToSVG New: For PerfReportTool 4.01, batched graph generation is enabled by default. Disable with -nobatchedgraphs Report generation is 33 percent faster and no longer consumes the entire CPU Unreal Insights Channels New: New concept "channels", which categorizes events into named groups. This allows users to manage the amount of trace data generated. Channels are disabled by default, and can now be toggled during live sessions. New: "Developer/TraceDataFilters" plugin contains UI for controlling channels in Unreal Insights and Gameplay Insights. Improvement: Unified command line argument to control events, "-trace=channel1,channel2,...". This replaces the previous arguments. For example "-trace=cpu,frame,bookmark" enables cpu profiler events, frame markers and bookmarks. Session Info New: New "Session Info" tab showing general info for current trace session being analyzed. New: Available info: session name, URI, analysis status, session duration, analysis modules. Store Browser Start Page New: Added a splash screen to appear each time session analysis is started in a separate process. New: Added "Auto-start" toggle option to allow session analysis to automatically start for new live sessions. It is also possible to set a filter by Platform or by Application Name (for live trace sessions allowed to auto-start). Improvement: Changed Unreal Insights workflow to start in "Browser mode" (the old "Start Page"). Trace session analysis is now started in a separate process ("Viewer mode"), for each session being analyzed. A single Browser window can be open at a time. Improvement: Improved tooltip for traces in the Trace Sessions list. Improvement: Renamed the "..." button to "Explore" (Explore the Trace Store Directory). Removed: Removed Start/Stop recorder functionality. Asset Loading Insights Crash Fix: Fixed crash when selecting Object Type Aggregation Tab on second open of a trace session. Misc Bug Fix: Fixed Timing Insights to have the layout persistent. New: Added "-OpenTraceFile=file.utrace" and "-OpenTraceId=id" command line parameters to force UnrealInsights to start analysis of the specified trace file or id. In this case Unreal Insights starts directly in "Viewer mode"). New: Added "-Store=address:port" command line (also as "-StoreHost=address -StorePort=port") to be able to connect Browser with a specified trace store. New: UnrealInsights now loads 'Default' phase plugins. Find the plugin at: Animation/GameplayInsights Developer/TraceDataFiltering New: Analysis: Implemented reading from session while holding write lock. Added support for recursive session read/edit scopes and multiple concurrent session readers. Networking Insights Improvement: Changed the Networking Insights tabs to open automatically only if the trace has net events. Networking - Packets view Crash Fix: Fixed rare crash (trying to use an invalid ConnectionIndex). Bug Fix: Fixed infinite loop if zooming in (vertically) too much. Bug Fix: Fixed package selection being reset if selection is made too soon. New: Added key shortcuts for selected package / package range: Left/Right --> selects previous/next package Shift + Left/Right --> extends selection (multiple packages) toward left/right side Ctrl + Shift + Left/Right --> shrink selection (multiple packages) from left/right side New: Added highlight for packets with at least one event matching the filter (by NetId and/or by Event Type). The packets with no event matching the filter will be displayed with a faded color. Networking - Packet Content view New: Added "Find Packet" UI on top of packet breakdown visualization: Previous / Next buttons + PacketIndex editbox. New: Added "Find Event" UI: First / Previous / Next / Last buttons --> to navigate between (filtered) events By NetId checkbox + NetId editbox --> to enable filtering by NetId By Type checkbox + EventType readonly editbox --> to enable filtering by EventType. The EventType can be set by double clicking in NetStats or an event in Packet Content view. "Highlight" checkbox --> to enable the Packet Content view to highlight only the filtered events (shows all other events with faded colors). Bug Fix: Fixed sorting of net stat groups (always by name). Runtime Bug Fix: Fixed issue where scope IDs of zero could sometimes be traced out from dynamic scopes. New: Cleared up load time profiling instrumentation--mainly removed tracking and reporting of internal EDL events. Added support for having multiple async loading threads. Added AsyncLoading throughput graphs. New: Implemented tracing of network traffic to Unreal Insights, which allows capturing all game network traffic which can be visualized using the new Network Insights tab. Timing Insights Timing view Crash Fix: Fixed a rare crash where a log message had a null category name. Crash Fix: Fixed a rare crash when trying to display a timing event having no name (invalid timer). Bug Fix: Fixed the vertical scrolling issue with track being discovered on the fly (for example, total vertical height of all scrollable tracks is now properly computed all the time). Bug Fix: Fixed "auto snap to limits" for vertical scrolling to allow "any position inside view" when the total height of all tracks is less than the height of the view. Bug Fix: Fixed the tooltips not showing the hovered timing event correctly in some situations (when a previous event has EndTime equal with StartTime of hovered event). Improvement: Improved performance by using draw state caching for each track. Graph tracks Bug Fix: Fixed high-DPI issues with graph tracks. It fixes issues when Unreal Insights runs inside the UE4Editor. Support for high-DPI is still disabled in Unreal Insights standalone application. Bug Fix: Fixed issues arising from tracks being zero-size when they 'scale-in' at first. New: Added a colored bullet next to each series in the context menu (for easier correlation). Also the list of series is now scrollable in the context menu. Timers view Bug Fix: Fixed tooltips not appearing on Linux when the trace session is first opened. New: Added highlight/filtered mode for a selected timer. Double clicking a timing event (in the Timing view) or a timer (in the Timers list) will highlight all timing events of the same type (timer). The other timing events are displayed with a faded color. Double clicking an empty space (in the Timing view) or same filtered timer (in the Timers list) will reset the filter. New: Added the ability to scrub the Time Marker (the orange vertical line; one set by clicking a message in the Log view) by holding Ctrl or Shift and scrubbing the ruler track. The time marker now also displays the time, similar to the current time readout. New: Added warning red line on sides of timing view when trying to over-scroll using key shortcuts (Ctrl + Left/Right/Up/Down) or using Shift + Mouse Wheel. New: Added ">" in front of the name of the selected (pinned) track. New: Added Frame timing tracks (for Game and Rendering frames) to correlate frame index and duration of each frame with gpu/cpu timelines. The two tracks can be made visible using the R key shortcut or using the "Tracks" menu. Bug Fix: Fixed auto vertical zoom to allow bigger value ranges (was initially limited to around 10^6, now should be around 10^18 -- for a 100px height graph). New: Added toggle header state to Frame timing tracks. When expanded, the respective track will draw vertical marker lines on edges of each frame (overlapping the entire view). Improvement: Improved navigation by using a pinned track for vertical position reference when scrolling or zooming. Ex.: Pinned track will remain at a stable vertical position relative to mouse pointer while panning. Also, the pinned track will remain at a fixed vertical position when scrolling horizontally using the scrollbar. Bug Fix: Fixed Ctrl + Double Click in Timing view not keeping the selected timer visible in Timers view. Ctrl + Double Click action selects a new timing event (and the corresponding timer in the Timers list), but also selects the time range of the selected event. When time selection is updated the aggregation stats is re-computed and timers list is re-sorted, so the selected timer needs to be made visible again. Counters view Improvement: Changed double click on a counter to also turn on visibility of the Graph track when adding a graph series for the respective counter. UnrealPak New: Added an option (-forcecompress) to UnrealPak that forces all compressed files inside a pak to be compressed, even if the compression results in a larger file size. New: Added memory freezing target platform layout into DataDrivenPlatformInfo (removed from TargetPlatform) so that UnrealPak can get it—although UnrealPak no longer needs it with index freezing disabled. New: Added support for disabling secondary order in UnrealPak to reduce fragmentation in platforms that use delta patching. Datasmith Crash Fix: Fixed a crash when switching Variant Manager Variants during Standalone Game mode. Crash Fix: Fix a crash when importing a VRED variants file with invalid transform variants data. Crash Fix: Fixed several crash issues around the MDL Importer. Bug Fix: Resolved many issues related to SubLayers when using the USD Stage window. Bug Fix: Fixed SubLayers disappearing when reloading a stage with the USD Stage window Bug Fix: Fixed translucent VRED materials not importing with Translucency Blend Mode override. Bug Fix: Fixed object metadata not importing correctly from Cinema4D scenes. Bug Fix: Fixed behavior of Variant Manager Enum property captures so that the widgets now behave similarly to the Details panel. Bug Fix: Fixed imported animations ignoring source framerate when importing VRED and DeltaGen FBX files. Bug Fix: Fixed an issue where the string shown when dragging an editor Actor onto a Variant Manager Variant. Bug Fix: Fixed animations imported from DeltaGen FBX files behaving incorrectly when multiple animations per Actor are present on the same timeline. Bug Fix: Fixed some Cinema4D files not being imported completely due to allocation issues. Bug Fix: Fixed missing objects and scene hierarchy when importing some Cinema4D scenes. Bug Fix: Fixed an issue where the wrong transformation was applied on some parts. These parts have a transform with a pitch component of +/-90 deg (Rotation around the right axis (around Y axis)). During the import process, a lot of conversions of the transforms are done: World transform to Local transform to World transform and these repeated conversions generate numerical errors. A fix is now in place to avoid this problem. Bug Fix: Fixed additional internal Cinema4D import options being exposed to Blueprint and Python scripting. Bug Fix: Fixed Actors with zero scale being discarded when importing Cinema4D scenes. Bug Fix: Fixed materials properties not being captured when enabling auto-expose on the Variant Manager. Fix additional Actors being captured when dragging Actors to the level with auto-expose enabled. New: Added support for IES light brightness when importing Cinema4D scenes. Bug Fix: Fixed animations not resetting properly when reloading the USD stage from the USD Stage window. Bug Fix: Fixed light/fog color property captures in the Variant Manager not correctly detecting when they differed from the current values. Bug Fix: Resolved an issue with Variant Manager Actor selection clearing when selecting through the editor Actor's components on the Details panel. Bug Fix: Fixed objects not referenced by any IFC project being ignored importing IFC files. Bug Fix: Fixed extra geometry being imported from Cinema4D scenes with Connectors and Atom Arrays. Bug Fix: Fixed texture tags not being propagated from parent Actors when importing Cinema4D scenes with Datasmith. Bug Fix: Resolved incorrect transform when importing VRED or DeltaGen FBX scenes with Actors that have 90 degree rotations on the Y axis. Bug Fix: Applied fixes for materials being imported from Alias Wire files. Bug Fix: Fixed the reimport of GLTF scenes not recognizing asset changes in some circumstances. Bug Fix: Fixed errors when using the Construct Object from Class node with some Variant Manager classes on Editor Utility Blueprints. Bug Fix: Fixes for nested SwitchActors overriding each other's effects. Bug Fix: Fixed DeltaGen SwitchVariants targeting the wrong objects on imported scenes in some circumstances. Bug Fix: Fixed warnings when importing IFC scenes without lightmap generation. Bug Fix: Fixed incorrect SpecularGlossiness to MetalRoughness conversion when importing GLTF files. Also fixes all textures being imported as sRGB mode. Bug Fix: Fixed primitives not animating after their visibility was toggled from the USD Stage window. Bug Fix: Fixed some shader compilation errors in the MDL Importer. Bug Fix: Fixed an issue about duplicates in scene hierarchy from a Revit Datasmith export. Bug Fix: Fixed mesh collapsing mechanism incorporating prims that were invisible, when using the USD Stage window. Bug Fix: Fixed an issue with Rhino import for UV orientation and UV mapping transform. Bug Fix: Fixed reimport of Datasmith materials. Bug Fix: Fixed incorrect location of spawned Actors using Spawn Actors At Location operation. Bug Fix: Fixed an issue with the glTF Importer for importing materials with an ambient occlusion map. Bug Fix: Fixed an issue with the AxF Importer for the import of materials with a period (".") in their filenames. We now provide better error/warning messages. Bug Fix: Fixed rounding errors on LevelSequence section sizes for animations created with the USD Stage window. Bug Fix: Fixed wrong normals orientation on wall entities exported from Revit Datasmith. Bug Fix: Fixed illegal characters in texture name don't survive datasmith export. Bug Fix: Fixed an issuer where Dataprep Merge and Proxy Mesh Operators leave empty mesh Actors in the hierarchy. Bug Fix: Fixed an issue where importing a new LOD on a StaticMesh asset imported with Datasmith would prevent that asset from being reimported. Bug Fix: Fixed an issue during a DatasmithScene reimport where the parent material of a material instance would change and cause incorrect results. Bug Fix: Fixed a bug where reimporting a Datasmith scene with different import options would not have any effect on the imported asset. Bug Fix: Fix importing of linearly-interpolated animations from Cinema 4D scenes. Bug Fix: Fix Relative Location, Rotation and Scale properties not being captured by the auto-expose feature of the Variant Manager when the Actor is manipulated via the viewport handles, in some situations. Bug Fix: Fixed an issue during a DatasmithScene reimport where the user's changes would not be reapplied properly to the lights in the scene after the reimport. Bug Fix: Fixed some situations producing empty dummy rows on the USD Stage window primitive tree view. Bug Fix: Fixed imported DeltaGen materials being shared when they have different AO textures. Bug Fix: Fix incorrect camera transform correction when opening z-up USD stages with the USD Stage window. Bug Fix: Fixed an issue with Rhino that resulted in incorrect use of mapping channel id. Bug Fix: Fixed missing export timing information in datasmith files exported from revit. Bug Fix: Fixed an issue with glTF that resulted in geometry normals imported when degenerate triangles present. New: Enabled Blueprint runtime access to the IsActive function of the Variant asset New: The DatasmithCore module is now available at Runtime. New: We now support "Smooth", "EaseIn" and "EaseOut" curve interpolation modes when importing animations from DeltaGen FBX files. New: Use mesh labels for imported Static Mesh asset names when importing IFC scenes. New: It's now possible to reorder Variant Manager property capture rows. New: The Variant Manager Actor binding selection is now synchronized with the editor Actor selection. New: We now re-use identical Static Meshes when importing Cinema4D scenes. New: Added sRGB flag control for imported Datasmith texture elements. New: It's now possible to rebind Actors to existing Actor bindings on the Variant Manager. New: Added support for loading (and quickly toggling the visibility of) USD purposes on prims imported with the USD Stage window. New: Added options to control purposes to load and view, as well as payload behavior when opening USD stages via the USD Stage window. New: Added color picker widgets for captured color and linear color properties in the VariantManager. New: Auxiliary files will now only be auto-completed on import options dialog for VRED and DeltaGen scenes if they match the filename of the scene exactly. New: VariantManager FunctionCallers now also provide access to the Variant, VariantSet and LevelVariantSets asset. New: Added support for capturing CineCamera properties with the Variant Manager. New: Added additional buttons on the top of the Variant Manager window for adding Actor bindings and property captures. New: Save CAD patch into Static Mesh polygon group for re-use in Geometry tools. New: Added support for animation and track delays when importing DeltaGen animations from FBX files. New: Added sharing of a common USD stage cache between the USD Stage window and Python scripting. New: Enabled support for Shininess textures imported from DeltaGen FBX files. New: We now allow controlling the offset and scale of SubLayer animations from USD Stages (when importing using the USD Stage window) by using the Sequencer and SubSequence tracks. New: Added combo boxes to edit primitive Kind and Purpose on the USD Stage window. New: Improved the logging and display of all errors and warnings emitted from the USD SDK when using the USD Importer. modules. New: Added support for Rhino sub-object materials to be imported. New: Added support for Rhino extrusion render meshes to be imported. New: Added support for undo/redo for most actions when using the USD Stage window. New: Added the PLMXML Importer. New: Added support for Rhino pivot set to geometry bbox center. New: Fixed incorrect names on Subcomponents imported with Datasmith. New: Updated Dataprep operation "Spawn Actors At Location" to support Blueprint based Actors. New: We now support the option to export Revit Structural Steel Connections. New: Added an option to Dataprep for folder importer to use multi-process import to speedup import on multi-core CPUs. New: You can now batch export Revit views. New: Implemented filter by layer for Dataprep. New: Revit can now export proper pivots location on entities. New: Export Revit.DB.FamiliyInstance.Mirrored/Handflipped/Facefliiped as tags. New: Added support for exporting Revit PBR materials. New: Added a TranslateScene method to populate the Datasmith scene. New: Implement an operation to add tags to Actor/component for Dataprep.. New: Added an option in Dataprep to implement a filter for Actors based on the number of triangles and/or vertices. New: Added an option in Dataprep to implement an operation to add meta-data to asset/Actor/component. New: Added support for importing points from Rhino models. New: Added Entity Tags to Rhino. New: Implemented "Randomize Transform" operation for Dataprep. New: Implemented consolidate operation for Dataprep. New: Added support for the MDL Importer on Linux. New: Added an option to Rhino import that lets you choose on import whether to have Datasmith tessellate all parametric surfaces for you or whether you want to reuse triangular meshes previously created by Rhino and stored in your Rhino scene. New: Reworked asset preview widget to become more consistent with the Level Editor's Content Browser. New: Added the ability to hide/show Actors to the scene outliner in the Dataprep Editor. New: Export Pipe/Duct Insulation and Lining in correct hierarchy. Improvement: We redid how Variant thumbnails are stored in the Variant Manager allowing setting thumbnails from file and fetching. thumbnails via Blueprint at runtime. Also fixes bugs with thumbnails sporadically disappearing. Improvement: Improved Rhino import to avoid cracks in the mesh. Improvement: Improved performance of datasmith CAD scene save/reload. Improvement: Improved performance of some Datasmith reimport. Deprecated: Previous versions of VRED and DeltaGen FBX importers have had their Blueprints cleaned up. To upgrade, scenes that used those Blueprints need to be imported again, or those old Blueprints need to be copied over from a previous Unreal Engine version. API Change: The python import API for Datasmith now reflects the internal processing. To now access the scene representation, you can call "translate_scene" method on the scene, and the workflow looks like this: call construct_datasmith_scene_from_file() Edit options... call translate_scene() Edit scene... call import_scene() API Change: The DatasmithCore module has been moved from "Engine\Source\Developer\Datasmith\DatasmithCore" to "Engine\Source\Runtime\Datasmith\DatasmithCore". DevTools AutomationTool Bug Fix: AutomationTool now looks for the IsBuildMachine environment variable set to 1 to set an implicit -buildmachine argument, rather than inferring it from uebp_LOCAL_ROOT being set. The uebp_LOCAL_ROOT variable is set at runtime, which was causing the IsBuildMachine flag to be set incorrectly for child instances. New: Added support for AutomationTool scripts when using foreign projects, and for game projects to output binaries under their own project directory. When compiling *.Automation.csproj files, the $(EngineDir) property will be set to the appropriate engine directory, allowing the project to resolve assembly references to the correct location. New: Refactored ini key stripping in UAT to be driven by config settings—exposed in the project packaging settings. Added the ability to set a list of config sections to strip while staging. Improvement: Improved startup time when running the BuildCookRun command by lazily constructing metadata for every project in the branch. BuildGraph New: Added option for building the installed engine with VS2019 instead of VS2017: -set:VS2019=true UnrealBuildTool Crash Fix: Fixed UnrealBuildTool crashing (rare) on launch. Bug Fix: Fixed issues where overriding the Visual Studio version from the command-line would not be used in targets, resulting in errors when an option like setting the compiler version was also used. Some command-line arguments are now passed to target creation (specifically, Visual Studio version). Also fixed so that HololensTarget is actually getting configured from setting sources, including command-line and BuildConfiguration.xml which we added attributes to match how Windows targets can be configured. Bug Fix: Fixed Rider projects failing to generate when a program was defined in the game projects folder—this is similar to how VS project generation works. Bug Fix: Added the engine version as a dependency for checking that the module rules assembly was up to date, fixing issues where a new UBT version can attempt to use an old version of the modules rules assembly. Bug Fix: Fixed issue with installed projects trying to use VSCode and getting incorrect IntelliSense for engine source code. Bug Fix: Added detection of all .NET Framework versions, throwing BuildException if none are found. Bug Fix: Fixed issue in UBT *.ini parsing not combining multiple escaped newlines into a single line. Bug Fix: Fixed log file being written to the Engine directory (even in installed builds). The log is now written to AppData/Local on Windows. Bug Fix: PVS Studio intermediates are now stored in a separate directory to avoid cloberring intermediates for regular builds. Bug Fix: The makefile is now invalidated if resource files or other leaf prerequisites are removed. Bug Fix: Read-only files are now removed when cleaning a target. Bug Fix: Any targets that can't be built on the current host platform are ignored when generating Intellisense data for project files. Bug Fix: Prevent writing compiled assemblies to installed engine directory. Write them to the AppData folder instead. Bug Fix: The C++ standard version is now propagated to the generated project files in order to use the correct environment for intellisense. Bug Fix: Fixed missing UAT/UBT log output on Mac. Standard macros set through the DefineConstants property (for example, "TRACE") were being overwritten by MonoCS, causing TraceLog() function calls to be compiled out. Bug Fix: - Removed extra Engine directory under the UE4 project above the Platform extension directory. New: Added Thread_UseAllCpuGroups to UnrealBuildTool config that enables UBT to use both CPU groups on high-core systems (such as 64-core ThreadRippers). New: Visual Studio solution files and project files are now generated deterministically by UnrealBuildTool to reduce the frequency of solution and project reloads. New: Improved IncludeTool error output when encountering unbalanced curly braces or conditional preprocessor macro blocks. New: Added option to build a list of plugins while not enabling them per target. New: The set of target configurations included in generated project files can now be configured via BuildConfiguration.xml. ProjectFileGenerator/Configurations New: Visual Studio 2019 is now the default compiler and IDE on Windows. New: Added a BuildConfiguration setting for forcing UBT to be built in debug configuration. VCProjectFileGenerator/bBuildUBTInDebug New: Single file compile now works correctly with generated *.cpp files. New: JetBrains Rider is now supported as an IDE from the editor and when generating project files. New: An explicit error will occur if engine modules are out of date when launching the editor, rather than UBT trying to build them and triggering sharing violation errors. New: Added an explicit check for output paths being longer than MAX_PATH on Windows. New: Added support for different analysis modes in the PVS toolchain. Defaults to just using general analysis, but other options can be configured by setting TargetRules.WindowsPlatform.PVS.ModeFlags or by setting TargetRules.WindowsPlatform.PVS.UseApplicationSettings = true to read from the PVS config file. New: A warning is now output if any intermediate path is > 200 characters under the Unreal Engine root directory, to trap MAX_PATH issues running the engine from longer base directories. Deprecated: Generating project files by passing -2017 or -2019 is discouraged—this is due to the lack of visibility this provides to other engine tools that generate project files. UnrealGameSync Bug Fix: Fixed a problem with auto-update not working correctly if the application is run directly (such as via a pinned start menu icon). Bug Fix: The editor is built with the selected project file even when running a content only project. This ensures that any non-default plugins are enabled. Bug Fix: The latest checked-in version of the UGS config file is now used to determine sync filter and stream settings, rather than the local version. New: Added support for parallel syncing. UGS now internally spawns multiple threads to sync on, and attempts to distribute sync commands evenly between them. Two threads are used by default, but this value can be adjusted from the Options > Application Settings menu. Bug Fix: Fixed syncing of individual changes ignoring selected sync filters. New: By setting the Enable=false parameter on a custom sync category entry in the project config file, sync categories can now be configured as disabled by default. New: The recent projects list is now capped at 10 entries. New: Each project tab now has a color tint matching the configured color tint for the editor frame in that branch. UnrealHeaderTool Bug Fix: Exposed the new 'UncookedOnly' UnrealBuildTool module type to UnrealHeaderTool and added 'PKG_UncookedOnly' as the corresponding package flag. Bug Fix: Fixed skipping over PURE_VIRTUAL macros in UnrealHeaderTool. New: To support UnrealHeaderTool error reporting, enabled exception support when building with Clang for Windows. New: Removed requirement for "= true" on AllowPrivateAccess metadata tag. Improvement: UnrealHeaderTool no longer exports Z_Construct_UFunction declarations. UnrealVS Bug Fix: SingleFileCompile mode now works when compiling with PVS static analysis. Improvement: XAML changes to the UnrealVS batch builder panel: Columns are now resizable Label colors come from VS themes and are visible with the dark theme Alignment fixed and dead space reduced Editor Crash Fix: Fixed an issue where DisasterRecovery crashed if the recovery session database could not be open or was corrupted. Fixed an issue where DisasterRecovery was freezing the Editor if the new session database could not be created successfully. Crash Fix: Fixed a crash that occurred when undoing the painting on a Skeletal Mesh. Crash Fix: Fixed a crash that occurred on certain files when using the speed tree importer. Mainly it occurred with those that had multiple assets with the same name. Crash Fix: Fixed a crash occurring with skeletal mesh bake material, and incorrect ID remapping. Crash Fix: Disabled input in Level Editor Viewports when a map is being torn down. Speculative fix - the call stack in the bug looks like we think we have a valid actor pointer, but it is probably getting garbage collected. The log is outputting level unload messages right before the crash. Crash Fix: Fixed a crash that occurred when a selected spline point was removed from the Details panel. Crash Fix: Fixed a crash that happened when compiling a running Editor Utility Blueprint that contained a Details view. Rebuilding the Details view widget after a compile is now a synchronous operation. Compiling multiple times caused the underlying widgets to be cleaned up on the main thread from the compile. The asynchronous Details view task created widgets that were then destroyed by the main thread's compile before finishing the task. Crash Fix: Crash fix for trying to open the Asset editor for an invalid Asset. Crash Fix: Fixed a crash occurring when a user deleted an actor from the level. Crash Fix: Fixed a crash occurring in the Static Mesh Editor after copy-pasting primitives. Crash Fix: Fixed a crash occurring during garbage collection when UObject references a SAssetViewItem(). Crash Fix: HISM: fix crash when manually setting a instancing random seed to zero. Crash Fix: Fixed a crash occurring with SPropertyEditorInteractiveActorPicker. Crash Fix: Fixed a crash occurring when saving over an existing level and pressing the ESC key. Crash Fix: Fixed a random crash observed when enabling or disabling disaster recovery during a Multi-User session because 'compiled in' or 'memory only' packages were hot-reloaded. Crash Fix: EditConditions no longer crash when their parent object gets deleted from under them, such as when editing a Blueprint CDO and then compiling. Crash Fix: Fixed a crash that occurs when trying to get SlateApplication in ShouldThrottleCPUUsage if called from a commandlet. Crash Fix: Fixed a crash that occurred after clicking launch if failed and message log was open to 'Packaging Results'. Bug Fix: Fixed the Disaster Recovery to prevent prompting the user to recover a session only containing non-recoverable activities (Multi-User activities). Bug Fix: Editor now logs an error if there is an issue importing a collision model. Bug Fix: Fixed an issue occurring when a static mesh is re-imported with the "Combine Mesh" option off. If the FBX has many different static meshes, it can pick the wrong mesh when re-importing one of them. Bug Fix: Fixed an issue with custom LOD workflow; the Insert LOD into the base mesh function was not updated to work with the latest skeletal mesh build refactor. Bug Fix: Made FBX Import accessible using Blueprint and Python. Bug Fix: Fixed some issues in the proxy merge tool. Static parameters of materials instances are now properly used. In some cases, the tool would only use the first section of a mesh when backing the texture of the merged mesh. Bug Fix: Made sure post-build refactored skeletal mesh has its dependent LODs regenerated after importing the base mesh. Bug Fix: When building a skeletal mesh, Editor now forces the Tangent and Normal options to True if the source data does not have the tangents or normals data. Bug Fix: The mesh click tool now uses hit proxies to find initially clicked Actors. Bug Fix: Fixed an issue in the Node Graph where a deferred Zoom to Fit would be lost due to deferred UI updates. Bug Fix: Added opacity texture to FBX Material instance imports. When importing an FBX, you have the option to import Materials as instances of a base Material, as opposed to creating new Material Graphs for each FBX Material. This change adds the option to import anything connected to the TransparencyColor. Bug Fix: Added support for content plugins in content-only projects. These were not being staged due to a unique target receipt for the project being generated. Bug Fix: When importing animation from FBX files, Editor now only merges Anim layers as needed. Unreal Engine requires all animations be merged to a single layer, so you cannot skip the merge entirely. This change only performs the merge when necessary, keeping the curve fidelity as long as it is part of an AnimStack with a single member. FbxCurveAPI is now using the same code as the FBX animation curve importer. The curve import now supports "tangent break" and "User tangent". Removed bad unit conversion for curve tangent. Bug Fix: Fixed an issue with HLOD Volumes and meshes' offset from origin. Bug Fix: The paint tool and current/next texture is turned off if there are not any paintable textures on the mesh. Bug Fix: Fixed skeletal mesh conversion so that it uses the LODMaterialMap when converting to a static mesh. Bug Fix: Fixed the mesh painting tools to allow navigation if the user is not clicking or painting on a paintable component. Bug Fix: Fixed an issue that prevents shadows from being rendered for dynamically-built static meshes. Added an option to build simple collisions when building static meshes. Bug Fix: Fixed an issue with spline component visualizer regression, which was preventing interaction with spline points in certain circumstances. Bug Fix: Fixed an issue where LoadLevelInstance location offset was not affecting the BSP geometry in a dynamically loaded level. Now a level's ModelComponents also have the same transform applied. This fixes their rendering, physics and static lighting. Bug Fix: Fixed an issue with propagating vertex color to a custom LOD. Bug Fix: Editor no longer tests against zero; instead it tests against nearly zero when computing the tangents for skeletal mesh. Bug Fix: Added missing shutdown/tool ending code to Mesh Paint tools and Mesh Paint mode. Bug Fix: Added a configurable threshold for Morph Target position delta. Also makes the other threshold available in the skeletal mesh build settings. Bug Fix: Fixed the static mesh re-import override of complex collisions. Bug Fix: The asset registry tags are now restored properly after a static mesh re-import. Bug Fix: Moved RawSkeletalMeshData into a private uasset in order to reduce the skeletal mesh asset size when there are a lot of morph targets. Bug Fix: Class View no longer loses selection when losing focus. Bug Fix: Fixed a bug where combo boxes could affect the wrong field when choosing the type for map variables. Bug Fix: Removed Editor Modes are also removed from the set of default modes for a mode manager. Speculative fix for an issue where Activate Mode was attempting to activate an invalid Editor Mode. When modes are destroyed, they may not be removed out of the default modes for the manager, which may cause an Activate after a deactivate to fail. Bug Fix: Mesh Texture Painting no longer allows painting on source textures with more bytes per pixel than the created render target data. Bug Fix: When using the "asset import tasks" in Python or in Blueprints, if you import multiple assets, the result can now return more than one asset per factory import. Bug Fix: Fixed an issue with starting the editor with "-vulkan". Bug Fix: Fixed verbosity of Multi-User/DisasterRecover endpoint discovery/timeout to prevent spamming the log. Bug Fix: Fixed transaction buffer being cleared when hiding Levels. Bug Fix: Updated the look and feel of the lock icons displayed on top of an Asset in the Content Browser when a user in a Multi-User session locked the asset. Bug Fix: Fixed some sub-optimal UV layout packing on some geometries. Bug Fix: Perforce Plugin: Use -P4Passwd cmd argument value when connecting to Perforce. Bug Fix: HideCategories on components in an actor now correctly hides the categories when the actor is selected. Bug Fix: Fixed dynamic tool menu entries appearing at the end of a section. Bug Fix: The New Project dialog no longer shows a second redundant Open button. Bug Fix: Updated session front end to properly hold on to log messages, even if they are currently filtered out. Bug Fix: Fixed a bug with HighResScreenshot arguments being parsed incorrectly. Bug Fix: Fixed validity checks not firing anymore and displaying an error when creating/removing project files on disk with the New Project dialog wizard open. Bug Fix: Fixed an issue where Editor 'SessionSummary' analytic event was not being sent. Bug Fix: The AttachToActor menu is no longer allowed to shrink. Bug Fix: "Snap Pivot to Vertex" now works by holding Alt+V. Bug Fix: Fixed a memory leak in SOutputLog. Bug Fix: The AssetEditorSubsystem now handles windows again, fixing "Restore Open Assets Tabs on Restart". Bug Fix: Fixed an ensure that was generating with the input binding editor panel. Bug Fix: Added a message when there are no Recent Projects available, because it was unclear what was wrong when there were none showing. Bug Fix: Fixed source control errors occurring during rename of file open for add. Bug Fix: Fixed issues where a temporarily set realtime state is saved between editor sessions causing users to be unaware that their viewport is not realtime. Shutting down the editor during PIE or remote desktop were two such cases. Bug Fix: The Component menu in the Actor Details view now filters out allowed classes based on UClassViewerSettings AllowedClasses. Bug Fix: Fixed analytics to emit the Editor SessionSummary event using the correct AppId, AppVersion, SessionId and UserId when emitted from another instance or process. Bug Fix: When you add an entry to an array in a component that is being edited in a locked Details view, it no longer generates asserts. Bug Fix: Fixed an issue where disaster recovery remote endpoint was not re-registering with MessageBus UDP transport layer restarts or auto-repairs. Bug Fix: Added tooltips to Save As error labels. Bug Fix: Fixed VS accessor to make sure source files are placed in the precise module that was selected, and not just a module beginning with the same name. Bug Fix: HideCategories on components in an actor now correctly hides the categories from the Details panel when the Actor is selected. Bug Fix: Font editor monitor DPI scale fixes. Bug Fix: Edit conditions no longer show themselves as duplicate inline toggles when shared between multiple properties. Bug Fix: Fixed Disaster Recovery plugin to prevent reapplying changes that were discarded during the session. Bug Fix: Disable "Hide Folders Containing Only Hidden Actors" for now. Bug Fix: Unregister all delegates when shutting down FEditorSessionSummaryWriter. This is necessary because Shutdown() is called even when the app is not shutting down, but rather when we are only shutting down analytics such as when toggling the sending of analytics in the editor settings. Bug Fix: Set texture render target 2d's default size to 1x1 so that reset to defaults tooltip will say 1 instead of 0. Bug Fix: Changed utility widgets to rebuild on compile instead of on reinstance, because the BlueprintReinstanced delegate fires when unrelated Blueprints are reinstanced as well. Bug Fix: Empty implementations of IPropertyTypeCustomization::CustomizeHeader() will no longer break containers by hiding the header row of a customized struct when the struct is viewed in a container. Bug Fix: Game default map is now correctly set to Minimal_Default when creating a project with Include Starter Content set. Bug Fix: Typo fix: changed PersistenWorld to PersistentWorld. Bug Fix: Fixed a bug where the 'Cancel' button sometimes couldn't be clicked on popup windows. Bug Fix: Fixed issue where the color picker wouldn't update color properties on components after the mouse was released for the first time. Bug Fix: Fixed an assertion occurring when clearing Disaster Recovery widget search field entry. Bug Fix: Source Control: Unattended package save now handles adding new files to source control in more situations. Bug Fix: We now use the display name for asset thumbnails. Bug Fix: Fixed the Multi-User active session widget so that it displays the client location as "N/A (-game)" if the client is in -game, because clients in -game do not emit their presence/level. Bug Fix: When you create a Blank project with starter content enabled, it now opens the Minimal_Default map. Bug Fix: Memory stomp detection in FTickableEditorObject. Bug Fix: Moving a folder with a numeric name (eg. "1", "1234") to another level in the hierarchy no longer causes an infinite loop. Bug Fix: Duplicate names for preview assets should no longer appear on the "Close Asset Editors" dialog. # No longer add preview type "assets" which return false to IsAsset() call as opened assets. Bug Fix: Fixed uncommon scenario which caused some assets to be duplicated when renaming. Bug Fix: We now clamp the Niagara parameters menu to 700 to avoid a flickering bug. Bug Fix: Trying to reload an in-memory-only no longer unmarks it as dirty. Bug Fix: Fixed an issue where pasted arrays would not perform deep copies of instanced objects, because children properties were not being rebuilt. Bug Fix: Combo boxes for enum bitflags will no longer display enum values that have been marked with UMETA(Hidden). Bug Fix: The Class Viewer now scrolls to the selected item on refresh. Bug Fix: When starter content is not available because the user's FeaturePacks folder is missing or empty, the option will not be selected by default in the New Project settings panel and then disappears when trying to reselect it. Bug Fix: Fixed StaticMeshActor losing the root component scale when placed with Surface Snapping enabled in Editor Viewport. # The inverse of the entire matrix was being applied for surface rotation snapping. This caused the scale to be reduced when the value of the transform was used in PlaceActor to multiply with the default transform information. Bug Fix: New Project wizard now correctly lists a Blank project when the Templates folder has not been synced. Bug Fix: Fixed an issue where GUIDs wouldn't be generated properly when they belong to a component attached to a child BP class. Bug Fix: Fixed an issue where the object selector UI was not working if the object was not loaded and had the same name as another object, such as 'Texture'. Bug Fix: Updated package autosaver to suppress the restore dialog when running in unattended mode. Bug Fix: - Added a small version of map check image brush so it shows up properly when using small icons. Bug Fix: Selected text is discernible in the Output log with default settings. Reverted the default setting for the selected text background color to what it was before. Bug Fix: Fixed the Disaster Recovery logs that could spam many 'lost server', 'discovered server' or 'timeout' logs. Bug Fix: Eliminated the need to archive a Disaster Recovery session after a crash or normal exit to ensure the recovery system exit and release all resources in a timely manner, ensuring the crashed session is immediately available for restoration on Editor restart. Added routine to clean up temporary files left over by Disaster Recovery in the project intermediary folder after a crash. Bug Fix: Fixed Multi-User/DisasterRecovery client not canceling in-flight connection task when Connect()/Disconnect() were called in a very short interval. Bug Fix: Fixed the Disaster Recovery service using a dangling reference when unmounting a repository. Fixed the Disaster Recovery service to unregister all request handlers on shutdown. Fixed the Disaster Recovery service to report a repository as mounted if the requesting client already mounted it in a previous request. Fixed Disaster Recovery service to clean up its repository database when a repository doesn't exist anymore on disk. Bug Fix: Fixed potential Disaster Recovery service name collision on the network that could happen if two Editor instances on two different machines get the same Editor process ID. Bug Fix: PIE sessions which launch a new standalone window no longer double render debug safe zones and all PIE options are more consistent with handling r.DebugSafeZone.XXXX cvars. Bug Fix: Reordered instanced components are no longer set to transient and disassociated from the current world. Bug Fix: With SPropertyEditorNumeric, if the value has changed using a slider then set the value without flags to ensure that all the callbacks are called. Bug Fix: Now the unique name is created from the same outer the object will be spawned into. Bug Fix: Stylus input plugin usage of Windows Ink API has been changed, to prevent an occasional deadlock that occurred when moving a mouse or stylus between open windows in the editor. Bug Fix: Fixed an issue where actor picker mode's tooltip was only present on first use. Bug Fix: Fixed analytics to compute session duration and idle time consistently. Bug Fix: Fixed an issue where Place Actor was not showing up in the Level Editor viewport context menu. Bug Fix: Changed the delta on the Number of Players slider so it does not get stuck on the max value. Bug Fix: Changed FAssetEditorToolkit to log invalid saved Assets instead of crashing, because individual Asset editors may not be checking for Asset validity. Bug Fix: Now the mesh adapter is checked to make sure it is valid before using it for mesh paint. Bug Fix: Fixed an issue where Lightmap UV channel was not set to correct index after import. Bug Fix: Fixed an issue on Linux where the EditCondition on a UProperty would sometimes not evaluate properly. Bug Fix: FSoftObjectProperty now correctly respects CanEditChange for properties when edited from the Details panel. Bug Fix: Fixed an issue where Message Bus UDP Messaging static endpoints were not being properly sent when the unicast endpoint was forced to a specific interface. Modified the Multi-User Server launch to bind a different port on the unicast endpoint if the port of the editor is set when it transfers its settings to it. Also added a way to explicitly choose a Multi-User server port for its unicast endpoint when launching it from the editor. Bug Fix: Fixed an issue that occurred during a FBX Skeletal Mesh import, where custom properties on imported skeletons would be applied on unrelated skeletal meshes. Bug Fix: Fixed an issue where the FBX import option "Material Search Location" was sometimes being ignored. Bug Fix: Fixed an invalid material baking result that occurred when "Reuse Mesh Lightmap UVs" is used with a mirrored (negative scale) mesh. Bug Fix: Fixed a race condition in Editor analytic code. Fixed an issue with idle time returned by Editor analytic when Slate did not register any user interaction since the Editor started. New: Added self-registering undo/redo client Reduces boilerplate for the common case where objects register on construction and unregister on destruction. New: Message dialog text and results are now logged. New: Added text verification to SEditableTextBox, similar to SInlineEditableTextBox. New: Added system to whitelist commands. New: FSlowTask improvements: ShouldCancel now ticks the UI from time to time (only on the game thread, max 5 times per second) so that the Cancel button interaction can be checked without artificially calling EnterProgressFrame. Added checks to make sure that EnterProgressFrame and ShouldCancel are only called from the game thread. New: Perforce source control provider now detects login expiration and displays error status in level editor toolbar. New: Niagara Override Parameters are now searchable in the Actor Details view. New: Calls to OpenMsgDlgInt now use FMessageDialog::Open instead. New: Fixed an issue where a long file system scan by DDC caused the system to hang when the Editor was shut down. New: Added a virtual CreateTitleRightWidget() method to SGraphNode, to allow for adding widgets to the far right of the title in a graph node. New: Updated the Tool menu to use a generic blacklist filter. New: Plugins can now disable tabs in the tab manager. New: Added support for command line files to be found inside project, and project plugins' directories. New: Added option to constrain tool menu height. New: When using UI scaling on a high DPI monitor, lines to pins on nodes outside the current viewport now draw correctly. New: Improved the performance of proxy mesh creation. New: When the engine detects that an abnormal shutdown has occurred, we now spoof a crash report and send it from the CrashReportClientApp. "Abnormal shutdown" defined as: the shutdown did not appear to use a known exit path. New: TransactionSystem: Exposed SnapshotTransactionBuffer to Blueprint as SnapshotObject. New: Multi-User: Display which packages are the culprit when failing to join a Multi-User session due to in-memory changes (dirty packages). New: Improved disaster recovery widget: Added 'Show Unrecoverable Activities', 'Show Package Activities' and 'Show Transaction Activities' to the View options. Prevented the 'Recover Through' button from being overlayed on top of 'unrecoverable' activities. Improved the tooltips for 'Recover All' and 'Recover Through' button to make it clear what is recovered and what is not. New: Improved material baking performance. New: Added support for "view modifiers" for editor viewports. These are delegates that allow plugins and other 3rd party code to affect the transform of the viewport. Removed the call to UpdateViewForLockedActorin CalcSceneViewbecause that violated the expectation that this method was only computing some view information based on the current transform/FOV/so on. New: Multi-User: adjusted base transaction transient property filters path in config files. New: Improved the performance of convex hull computation. New: EditCondition now supports bitflags that use enums. Example: MyIntProperty & MyEnum::Flag (and negation by tacking on == false). New: Modified AsyncTaskNotification to add a prompt state, along with a hyperlink for slate-based notification. Added a function to pulse basic notification with a specified glow color while they are pending. New: Multi-User: Added a setting for a different connection validation mode, allowing for a prompt on connection instead of hard failure. Source Control validation now checks if checked out files are actually modified, instead of just being checked out; if validation mode is soft, Multi-User connections are allowed to proceed. There is now hot reload for dirty packages on connection. New: Multi-User: Exposed Multi-User Default Connect to Blueprint along with a mean to query connection errors. New: Added a parameter to thumbnail renderers' Draw function, so that we can instruct them whether to clear the render target or not. This is for fixing thumbnails that render multiple sub-thumbnails to the same render target. New: Added a sine wave deformer to the DisplaceMeshTool. New: Added Perlin noise option to the DisplaceMeshTool. New: CoreDelegates::FCrashOverrideParameters now includes values for SendUsageData and SendUnattendedBugReports. These values are set when the values change in UAnalyticsPrivacySettings and UCrashReportsPrivacySettings respectively. New: Improved the performance of shader compilation when using Incredibuild. New: Added handling of UObject pointers to edit conditions to enable equality, inequality and nullptr checks of UObject pointers in the form of: MyProperty != nullptr, or MyProperty == OtherProperty. New: Added an option to set a custom StarterContent feature pack in a template. This is to allow different sets of starter content to be specified for different project types. The New Project wizard will always create an empty Content directory. New: Optimized AssetView filtering for DataTables. New: Updated actor selection Blueprint nodes to call modify on the selection state, allowing them to be picked up by active transactions. New: Updated "Prompt When Adding To Level Before Checkout" to use cached source control state, instead of running a fresh query each time an actor is added. New: Hidden settings in the new project wizard will now write no values to the config file. This puts more burden on the template writer, but should not cause any template settings to be overwritten by hidden settings. New: EditorViewportClient now supports temporarily overriding the current EngineShowFlags, to temporarily disable rendering features like TAA, Motion Blur, and so on within a specific EdMode. New: Play-In-Editor failures due to invalid credential indexes now give better error logging, instead of a generic "Login Failed" which was easily confused with username/password issues. New: Raised the minimum slot height of the Actor Details panel from 20 to 80 to reduce overlap. New: Multiple improvements to the "View Mode" menu of the Viewport, including a feature request from JIRA UE-81469. Most of these improvements are related to the submenu of "View Mode" called "Buffer Visualization": Issue 1/6 (the JIRA itself): The "View Mode" UI displays as its title the name on the selected category or subcategory chosen by the user (or otherwise the default one). However, this behavior was not consistent across different categories. For the "Buffer Visualization" category, if any of its subcategories were chosen, "View Mode" was only displaying "Buffer Visualization" as title, rather than the particular subcategory chosen. This was not the case for the other 2 categories ("Level of Detail" and "Optimization Viewmode"). Solution 1: The displayed name of "View Mode" is now forced to be the exact chosen (sub)category, regardless of the category ("Buffer Visualization", "Level of Detail", or "Optimization Viewmode"). Issue 2/6: The name displayed as the selected "View Mode" was being assigned in parallel with the displayed name of the buttons representing each (sub)category. This resulted in some categories and subcategories having different names in the buttons than in the "View Mode" label itself. The behavior was also not consistent across the different menus inside "View Mode". Solution 2: The displayed name of the chosen label is forced to match the displayed name of the button itself. Issue 3/6: "Optimization Viewmodes" was the only category without an icon. Solution 3: Icon assigned to one of the icons of its subcategories. In particular, the one from "Quad Overdraw". Issue 4/6: "Buffer visualization" was the only category without a proper tooltip, it was displaying a FName, rather than a more human-readable FText. Solution 4: Assigned its tooltip to the FText field of DisplayedName. Issue 5/6: When the subcategory "Overview" (of "Buffer Visualization") was chosen, the displayed name on each individual subwindow that are generated corresponded with limited FNames (rather than a more human-readable text). Solution 5: Assigned their value to the FTexts corresponding to their DisplayedName value. Issue 6/6: Inside the possible Buffer Visualization Materials, LightingModeland ShadingModelare being used interchangeably all over the code, leading to mismatches and confusion for the user. For example, UMaterialInterfaceand UMaterialuse LightingModel, while FBufferVisualizationData(thus FBufferVisualizationMenuCommandsand FEditorViewportClient) use ShadingModel. New: Disaster Recovery now has improved scalability, memory footprint and performance. New: Changed the default behavior of the Interactive Tools Framework to auto-accept instead of auto-cancel when the user is swapping away from tools. New: The mesh paint brushes now adapt to the size of the selected mesh. New: Added a warning that displays when passing the wrong type of option class during FBX automated import, instead of silently ignoring the given options. New: When reimporting a FBX file and a material conflict occurs, the user has the option to "Reset to FBX" to resolve the conflict. The conflict resolution was UI not available during automated reimport, so we added a new option called "bResetToFbxOnMaterialConflict" to the automated import. When enabled, any material conflict that may occur during an automated reimport will apply the "Reset To FBX" behaviour automatically. New: We have fixed a number of issues with mouse high-precision mode behavior over Remote Desktop. New: Added the "Res=1280x720wf" format command line argument to set window resolution in a compact way. New: Volumetrics: Updated SubUVMaker by adding the ability to export EXR textures to disk. Made the Blueprint capable of being edited during simulation, which will trigger a recapture using the new settings automatically. New: Volumetrics: Updated SubUV Maker tool to support Motion Vectors, Baked Lighting, Temperature, Normals. Flipbook_MotionVectors Function. New: Volumetrics: Voxelization BP now supports any Actor type as long as it has a MeshComponent. Included examples for using meshes with WPO as forces. New: Added OnEnginePreExit() delegate to core delegates. This delegate is fired off before closing out the AssetTools, WorldBrowser and AssetRegistry modules, for handling shutdown tasks that would require these. New: You can now add metadata to FString and FName UPROPERTYs; this metadata specifies a dynamically called function to get the options that are available to the user in a Details panel. meta=(GetOptions="FuncName"), UFUNCTION() TArray<FString> FuncName() const; New: The Disaster Recovery Hub UI is now accessible from the Editor menu by selecting Window > Developer Tools > Recovery Hub where you can explore, import and inspect recovery sessions. Added a Disaster Recovery user setting to configure the root session repository folder (folder containing the recovery sessions). Added a Disaster Recovery user settings to configure how many 'recent' and 'imported' sessions should be kept in the history. Content Browser Bug Fix: Fixed ensure that occurred while scrolling in the Content Browser. Bug Fix: Columns in the Content Browser's column view can no longer be resized until they are uneditable. Bug Fix: Relaxed project name assumptions when estimating the cook length for an asset. Bug Fix: Fixed an issue where thumbnail editing mode was not responding to mouse drags. Bug Fix: Fixed IsValidFolderPathForCreate so it uses the path on disk for maximum length check. Bug Fix: We now hide dynamic collections from the reference viewer list, as they require frontend filters. Bug Fix: Updated "Submit to Source Control" to include plugin content. Bug Fix: Pressing the delete key now deletes a folder when the folder sources view is focused, even if assets are also selected in asset view. Bug Fix: Text properties now write as basic strings when exporting a Data Table to CSV for diffing. Bug Fix: Fixed an isee where Show Engine Content was a disabled option in debug builds. New: Added "Remove All But This" context menu option to the content browser filters list. New: Fixed a delete pop-up that pointed to the last selected folder after the folder was deleted in the content browser. New: Split asset path validation into two separate checks, to provide clearer messaging to users on length rules. New: Added a "docked" mode for the collections view. Foliage Bug Fix: Fixed cook non-determinism in UFoliageTypes. Bug Fix: Fixed foliage non-deterministic cooking by removing unused UProceduralFoliageSpawner::bNeedsSimulation that can change depending on the cooking order. New: Fixed HISM Cluster Tree display not taking into account actor's transform (reported in a UDN). Landscape Bug Fix: Fixed a crash that occurred after moving a Landscape Actor to another Level. Bug Fix: Fixed a bug that occurred for NaN in Landscape when zero scale is set. Also clamped bounds to avoid hitting a check. Bug Fix: Landscape now obeys the Render in Main Pass setting correctly. Bug Fix: Previously, there was a problem with bad index buffer generation on mobile landscape. Vertex Key generation was not unique for a 255*255 grid. Removed this, and replaced it with a simpler array lookup. Also removed its use as a TMap key. Bug Fix: There was a copy-paste typo that broke landscape tessellation in 4.24. This is now fixed. Bug Fix: Fixed a crash occurring when using RVT and tessellation together in a landscape material. Bug Fix: Fixed an issue with FTriangleRasterizer DrawTriangle division by zero case. Bug Fix: Fixed a SimpleCollision null DominantLayer crash. Bug Fix: Reimport Tiled Landscape Layer Weightmaps now has the same behavior as Import (no more forcing a normalization of layer weightmaps). Bug Fix: Fixed an issue with landscape simple collision returning incorrect physical material. Bug Fix: Fixed a crash that occurred when entering Landscape Mode with Mirror Tool enabled and mirror point set to 0,0 but without any landscape. Bug Fix: Fixed lost landscape data when opening/making visible a level with non-edit layers data only while the main landscape level has been converted to edit layers : moved copy from old data to layers in PostRegisterAllComponents, which is executed upon making the level visible. Also fixed the logic for detecting data conversion is needed and fixed a crash when cleaning up weightmap layer allocations. Bug Fix: Fixed a crash that occurred when creating a landscape from scratch with edit layers enabled. Bug Fix: Fixed some places in landscape code where the editor LOD override is not respected. Bug Fix: Fixed a crash that occurred when creating a layer info on a list of layers with none assigned yet. Bug Fix: Fixed an issue with ALandscape::PostEditChangeProperty not properly handling transform changes. Fixed an issue where ALandscapeProxy::PostEditChangeProperty was not properly handling Z-scale changes. Bug Fix: Fixed an issue with the update of Physical Material not being propagated properly. Bug Fix: Fixed bad index buffer generation on mobile landscape with subsection vertex count of 256x256. Bug Fix: Fixed an issue with flickering landscape when using edit layers in D3D12. Bug Fix: Landscape Visibility Painting: Fixed the tool so Visibility can be painted even if some Landscape actors do not have a Hole Material set (check per component). Bug Fix: Fixed a crash occurring with landscape grass mesh when there are more mips than the landscape currently has. Bug Fix: Fixed Add Ramp landscape tool debug display when landscape is rotated. Bug Fix: Fixed a crash occurring when unloading a level with a landscape actor being displayed in the details view and both the details and level tools on-screen. Bug Fix: We now support use of grayscale textures for landscape image brushes. Bug Fix: Fixed an infinite loop that occurred when undoing a landscape change. Bug Fix: Fixed a crash that occurred when importing Landscape layers in the New Landscape Tool. Bug Fix: Fixed an issue with landscape component ForcedLOD option. New: Changed display name of Landscape spline control point's "Width" parameter to "Half-Width" so that it reflects what it really controls. New: Removed loose global parameters from landscape vertex factory. New: Landscape Layers: UI Reverse order of Layers. New: Grass map rendering now uses the fixed grid vertex factory. This keeps it independent of view LOD. New: Modify default for Landscape VirtualTextureNumLods from 0 to 6 This new value reduces vertex interpolation artifacts when writing to a virtual texture at the cost of increased vertex count in the runtime virtual texture page render pass. New: Landscape Layers: Collapse Feature. New: Reimport weightmap paths are now stored per tile when importing a tiled landscape. New: Runtime virtual texture world height packing now depends on the volume extents to give optimum precision for each volume. Material Editor Bug Fix: Added additional validity checks before pinning a weak pointer. Bug Fix: Fixed back facing plane in default camera view in Material Editor. Bug Fix: The Defer Compression default arrow is hidden after compressing a texture. Bug Fix: Fixed an issue where pin names on SetMaterialAttributes node would not match Material input names. Bug Fix: Added a null check when copy/pasting Material nodes. This prevents a rare crash, but it could cause nodes to be pasted improperly when the same crash conditions are met. Bug Fix: Removed MaterialFunction expressions from the list of types that allow parameter conversion. Bug Fix: Fixed an issue where grid lines appeared not to draw in the Material Editor on initial launch. Bug Fix: Updated material editor to save and restore metadata when updating materials. Bug Fix: Fixed the Material Instance Editor to properly respect Undo. Bug Fix: When deciding what to recompile after changing an MF, look at all Material Instances instead of stopping after the first match. Bug Fix: Fixed an issue with layer parameters being inherited from one MIC to another MIC. Bug Fix: Set the stats message log to use a fixed size font, and removed tooltip text that was duplicated. Bug Fix: Channel names for the channel mask parameter are no longer allowed. Bug Fix: The Material layer Parameter Preview panel will no longer crash in the Material Editor. Bug Fix: Added more validity checks to LinkMaterialExpressionsFromGraph. Bug Fix: The visibility of the Material layer parameter is now correctly impacted by static switches. New: Added a GetNumberMaterials function to UEditorStaticMeshLibrary for getting the count of material slots on a static mesh. New: Now Material Editor propagates changes made to layers in the base material to derived Material Instances. New: Removed an unneeded Realtime checkbox on the UI for material viewport. New: Added Physical Material Masks to Materials, which are used to associate multiple physical Materials with a single Material based on a mask. This is only supported when Chaos physics is enabled. In the Material, set the Physical Material Mask to a mask and Physical Material Map to an array of physical materials. In the Static Mesh properties, Support Physical Material Masks must be enabled and additional data will be stored at runtime. Media Framework Bug Fix: Added code to WmfMediaPlayer to avoid a deadlock during shutdown between a tickable thread asking for player time and the player actually shutting down. New: Introduced a tighter active period for FMediaTicker thread and a new optional feature that destroys players after closing. To use this feature, players must implement GetPlayerFeatureFlag and have the function return True when AllowShutdownOnClose is passed in. Static Mesh Editor Bug Fix: Fixed the Reset Camera toolbar button in the Static Mesh Editor. Bug Fix: In the Static Mesh Editor, fixed a bug causing the lightmap coordinate index to be stuck at 0 when there were generated LODs in the Static Mesh. Bug Fix: Fixed a bug that could sometimes crash the Editor when trying to visualize convex collision in the Static Mesh Editor. Bug Fix: In the Static Mesh Editor, the "Remove UV Channel" option now supports Undo/Redo. Bug Fix: Fixed a bug that sometimes crashed the Editor when modifying values in the Static Mesh Editor. Bug Fix: Fixed a crash that sometimes occurred when opening the Static Mesh Editor. Bug Fix: Fixed a bug that could prevent the user from using the LOD picker after the number of LODs was changed in the Static Mesh Editor. New: Static Mesh Editor no longer triggers a lighting rebuild while interactively changing the lightmap properties. Scripting Bug Fix: Ensured that start-up scripts run when using Python in commandlet mode. Bug Fix: Ensured that Python-wrapped objects have a valid internal state before attempting to nativize them. Bug Fix: PySlate now checks that Slate is available rather than crashing. Bug Fix: Emit RuntimeError (rather than a generic Exception) in Python for Blueprint execution errors. Bug Fix: The OnPythonInitialized notification is now deferred until after start-up scripts have run. Bug Fix: Fixed const-ref parameters passed to Python overridden functions losing their value. Bug Fix: The Alembic Importer now uses the import settings provided by the user script. New: Added a way to force a change notify when using SetEditorProperty in Python or Editor Utilities, even if the value is unchanged: This may be needed if you've edited data on an instance indirectly (eg, modifying an array reference on the object), and need to force a change notification to update some dependent data. In Python, this can be done by passing unreal.PropertyAccessChangeNotifyMode.ALWAYS as the notify_mode argument to set_editor_property. In Editor Utilities, there is a new advanced pin available for setting the "Change Notify Mode". In both cases the default method is to notify only when the value actually changes. New: Exposed the Get/SetEditorProperty functions to Blueprints. New: Exposed enum asset types to Python: They must be accessed via unreal.get_blueprint_generated_types(...) as asset names are not guaranteed to be unique, so they're not added to the unreal module. Once a type has been generated for Python, it will be updated if the underlying asset is changed. Should the asset be deleted then the Python type will be reset to clean state and become mostly unusable. This change also ensures that the Unreal type pointers referenced by the Python are referenced for garbage collection correctly. This will cause a warning if you try and delete an asset with a type used by Python. New: Added a Blueprint node for calling Python with arguments: This allows people to create a Blueprint node that can define a blob of literal Python script, along with wildcard inputs and outputs to be used by the script itself. The node takes care of marshalling data in and out of the Python script, and allows custom Python to be invoked from Blueprints. New: Added set_editor_properties to let you set multiple properties but only emit a single pre/post change notification. New: Marked LevelSequenceFactoryNew as BlueprintType, so Editor utility blueprints/widgets can create Level Sequence Assets. New: Added support to script re-imports of FBX animation sequences. New: Added support to script re-imports of FBX animation sequences. New: Added PropertyAccessUtil to contain the common logic of getting and setting properties in a way that emits change notifications. The existing Python code has been ported to use this, and this can also be used by Blueprints (or other C++ setting properties) to allow property changes that emit change notifications. New: Improved the error message occurring when attempting to create a container property in Python that directly nests another container. This is not supported by the reflection system and is already disallowed, but the error message was confusing. It now gives an error message more like the Unreal Header Tool, and suggests using a struct as an alternative to direct nesting. New: Improved the error message occurring when creating a Python property directly from a container type. It is a common mistake to omit the type or types for the container elements, so this case will now present an error stating the reason and the correct fix. Sequencer Crash Fix: Fixed a crash that occurred when displaying keys in the cinematic viewport with a large view range. Crash Fix: Sequencer now closes if any of the sequences were reloaded/replaced. This fixes a crash that could occur if subsequences were reloaded while viewing the Master Sequence. Crash Fix: Resolved a crash occurring when using "Store Curves" in Curve Editor. Crash Fix: Creating an actor with a duplicate name no longer results in crashing Editor. Crash Fix: Accessing a child folder after postloading a movie scene folders no longer crashes Editor. Bug Fix: Fixed a Return Early event triggering incorrectly when and event is specified by an object binding, but there aren't any bound objects (ie. spawnable hasn't been spawned). Bug Fix: You can now visit key areas from track, key area, and category nodes only. This prevents selecting keys when object binding nodes are collapsed. Bug Fix: Fixed an issue with transform baking for rig rails. The current position on rail is updated per tick, so the actor needs to be ticked on each baked frame. Bug Fix: Added protection against assigning a sequence that would produce circular dependencies. For example, if the current level sequence is named NewLevelSequence, this fix prevents the user from naming a subsection the same thing (NewLevelSequence, in this example). Bug Fix: Autoscrub now only occurs if you are in a scrubbing state. This fixed double evaluations occurring when autoscroll is enabled during play. Bug Fix: Setting the end range is now exclusive instead of open. This fixes an issue where if you tried to set the end range through the Properties menu, it would be inconsistent with dragging the end range. Bug Fix: Added reentrant guards to prevent starting and stopping recordings while recording is in progress with Take Recorder. Bug Fix: AddGivenTrack now allows for tracks with the same name, and the duplicate check is only done on copy/paste. This resolved an issue that occurred with a previous bug fix. Bug Fix: Fixed an issue with adjusting the animation start time during resizing, and the start time is now clamped to the beginning of the clip. Bug Fix: Viewports are now redrawn after restoring state on close(). This fixes a bug where if Realtime is not enabled in the viewport, closing Sequencer leaves objects drawn in a Sequencer animated state before the next redraw. Bug Fix: A spawnable's object template is now copied so that it is not limited to the given transient outer. This fixes a bug where the DefaultSceneRoot for an empty actor would be lost on a copy/paste. Bug Fix: Fixed the "Transform Selected Sections" tool so that now sections and their keys are transformed. Bug Fix: Expanded the track node if row indices were changed. This fixes an issue where if a multi-row track is regenerated, it shows the multiple rows rather than displaying collapsed rows. Bug Fix: Fixed upper playback range for subframes so that keys that lie beyond the integer frame are evaluated. For example, [400, 800.5] now returns an upper bound frame of 801 so that a key at 800 will be evaluated. This includes tests for frame number range. Bug Fix: Fixed spawnable particles not firing by setting force particles as inactive on spawn by default, as opposed to "not autoactivate". Bug Fix: Fixed subframe end playback range conditions so the playback range excludes flooring and tests the stop condition with GetLastValidTime() rather than with DurationFrames. Bug Fix: Sequencer now only restores pre-animated states on actors that are being removed from the binding. Otherwise, all actors would be restored unnecessarily. Bug Fix: Bake transform now refreshes the skeletal mesh the object might be attached to (just like export fbx performs updates). Bug Fix: Added check for bound component when exporting 3D transform data to properly support applying rotations when exporting camera components in a Blueprint. Bug Fix: Fixed an issue where the name check allowed more characters than the limit when renaming sequencer folders and object bindings. Bug Fix: Return early in GoToEndAndStop now triggers correctly if the sequence player is already stopped and at the end of the sequence. Bug Fix: Fixed an issue where changing levels with ClientTravel was breaking the ability to activate a sequence when using Play In Editor. Bug Fix: Fixed Mute/Solo incorrectly applying to parent binding instead of selected track. Bug Fix: Spawnable objects no longer fall in streamed levels during Play in Editor. Bug Fix: Fixes several usability issues when dragging tracks or sections between rows. Bug Fix: Fixed many special cases when using text search. Bug Fix: ProRes/DNX now defers file creation until after the sequence updates. This allows file format tokens like {sequence} to be replaced with the name of the rendered sequence. Bug Fix: Thumbnails no longer display over blending areas of the Camera Cut track. Bug Fix: Camera bindings now correctly auto-create camera cuts. Bug Fix: Fixed a zooming issue with mouse wheel in CurveEditor while panning with mouse dragging. Bug Fix: Adding a key to a Blueprint component now correctly uses the existing value of the component. Bug Fix: The geometry cache now evaluates correctly from Sequencer when the cache is not visible. Bug Fix: Fixed sections not being visible on some pinned tracks. Bug Fix: Dropping assets onto an object binding now allows the drop to occur below so an asset can be moved to last in the tree. Bug Fix: When building a menu in Sequencer, the common base class now correctly passes to the selected object bindings. Bug Fix: Made several fixes to prevent reentrant play/evaluation during stop. Bug Fix: Sequencer now forcibly ticks if the animation pose has already ticked this frame. This fixes first frame discrepancies if a sequence is played after actors have ticked on a particular frame. Bug Fix: The active animation instance on a skeletal animation component now correctly clears when stopping a sequence. This results in the anim-notifies stopping as expected. Bug Fix: Sequencer now only renders the waveform preview for assets that are currently visible on an audio track section. Bug Fix: BeginSlipSection now calls correctly in Sequencer. Bug Fix: GetUniqueName now returns correctly if the name is already unique. Bug Fix: The Local Time is now set as Looped by default so the evaluation is Play To instead of Jump when autoscrolling. Bug Fix: Fixed an issue where the sequence was not playing again if the world is unloaded and reloaded. Bug Fix: Section easing now updates the section correctly when it is moved or resized through property editing. Bug Fix: Loop markers in audio tracks now render correctly. Bug Fix: The "Add a Camera" button now always displays on the transform track's object binding edit button. This fixes an issue where looking at the spawnable template isn't enough to determine whether a camera exists because the camera component might not exist at the time the button is created. Bug Fix: Resolved an issue that allowed cyclic inclusions within template sequences. Bug Fix: Replicated sequences now increase their sync threshold by client ping. You can verify hitch correction through the "Correcting de-synced play position for sequence..." message in the log. For more information more often, enable "log LogMovieSceneRepl VeryVerbose". Bug Fix: Sequences not considering TimeScale or DemoPlayTimeDilation during playback now functions correctly. Bug Fix: Implemented manual checks for broken tangents on FBX imports to avoid incorrectly passing broken tangents through the import. Bug Fix: Track count display on sub-sequences now functions correctly. Bug Fix: Copying and pasting tracks with multiple subtracks now functions correctly. Bug Fix: Exporting a transform in FBX now correctly exports the combined, whole blended track, instead of just exporting the first section. Bug Fix: Custom Cinematic Camera film back text now correctly receives values for focal length and aperture. Bug Fix: The Director Blueprint is now renamed whenever a Level Sequence is renamed. This resolves an issue where Event Tracks stopped working when Level Sequences were renamed. Bug Fix: Set curve attributes now operate on all curves, rather than just the selection in Curve Editor. Bug Fix: transient flags on spawnable template components now clear correctly. This resolves a bug where components added to an empty actor were not saved. Bug Fix: Prevented a crash that occurred when loading a sequence that has broken data. Bug Fix: UnfocusedVolumeMultiplier should be fixed at 1.0f during movie rendering so that audio doesn't cut out during movie rendering. Bug Fix: Curve Editor: Fixed an issue where the Filter window was not opening for standalone curves; this was caused because no tab manager was provided when creating the curve asset editor's SCurveEditorPanel in FCurveAssetEditor::SpawnTab_CurveAsset, which is needed to create the Filter Window. Bug Fix: Sequencer: If you change a skeleton on a skeletal mesh while its sequence is playing, and then do a restore state the Editor could crash. This is because the Anim Instance is referencing an old skeleton which has a different number of bones. Now Sequencer uses InitalizeAnimScriptInstance so bones/morphs/curves get recalculated. Bug Fix: Sequencer: Import onto objects with the focused template ID. This fixes an issue preventing importing onto an object in a subsequence. Bug Fix: Timecode: Fix crash in details customization when data is empty. New: Added "Time Warping" as a new form of time transformation when looping sequences. This enables you to transform time from a root sequence into a looping sequence and, assuming you know which loop number you are in, you can convert from a local/sub-sequence time back into a root time. New: Sequencer and the sequence API compiler now deal with time transforms that might include looping. The API is mostly backwards compatible, except for inverting a time transform. New: Exporting an FBX through Python now creates spawnables for scripted Sequences. New: Added a PostProcessingMaterial command line argument to specify the post processing material to use for custom render passes in Movie Scene Capture. New: Added a new API for customizing track in/out easing. New: Updated the digits type interface for number padding (such as 02, 002) in Take Recorder. New: The Sequencer clock now uses a global timecode based on the timecode provider, and is now called the Relative Timecode, instead of Timecode. New: The Sub-Sequence and Cinematic Shot tracks now display looping indicators. New: Added a module operator to frame times. New: Added support to search channel and folder names, in addition to track names. New: Keys and channels in Curve Editor can now be copy/pasted. New: Updated Sequencer scripting options to include SetReadyOnly and IsReadOnly to the Level Sequence. New: Updated Live Link log files to reflect updated fps options, with the default setting at 30 fps. You can find this information in the VeryVerbose log file. New: Live Link recordings now discard samples if the sample is outside the start frame. New: Updated Sequencer's scripting to create bindings in the root or local space. New: When you duplicate a possessable object binding, it now duplicates the possessed objects into the current level and assigns them to the duplicated binding. New: Sequences can now be run in Clean Playback Mode, which toggles the game view and hides the viewport UI when running a sequence in-game. Clean Playback Mode automatically defaults to On. New: You can now designate the tooltip display to include the section title, and is useful for when the width of the section is smaller than the section title. This adds tooltips for sections like Shots, Animations, Camera Cuts, and so on. New: Added {date} {month} {day} {year} {time} format strings for Movie Scene Capture. New: Added GetBoundObjects(), which allows access to spawnables through Python scripting by evaluating a frame range. New: Sequencer Audio templates now check for a 0.2s desync (cvar Sequencer.Audio.MaxDesyncTolerance) between expected audio time and actual audio time. This makes it easier to detect the audio play duration after starting Play, particularly when replaying, rewinding, and fastforwarding sequences. New: The Take Number now defaults to 2 digits for better readability. Take Recorder also uses this Sequencer Project Setting when creating filenames and Take metadata. New: In Curve Editor, you can now save loaded curves to the Level Sequence. New: You can now specify an external clock source to interface with Sequencer. This is supported by a new media component that controls the media player and texture to support multiplayer PIE through real-time playback rather than offline, frame-accurate playback. New: Added cvar for controlling net playback synchronization threshold. Sequencer.NetSyncThreshold defaults to 200ms and defines the point at which desync is deemed unacceptable and a force sync is required. New: Take Recorder: Fixed an issue with recording spawned particle systems. Particle systems are spawned and attached to the World Settings Actor. These were not getting picked up when gathering newly created components, because they are scene components on the Actor and not the root component. Also, fixed an issue where the property map needs to be regenerated when adding the world settings actor as a new actor source during PreRecording. New: Renamed Play Rate track to Time Dilation track. New: For command line rendering, the CaptureGammut command line argument now parses names of enum entries as well as numbers. Improvement: ImageWriteTask now uses a TUniqueFunction instead of TFunction to store PixelPreProcessors. This allows storing unique pointers in the PixelPreProcessor tasks. Improvement: Changed the following Curve Editor hotkeys to remove conflicts with other shortcuts in Editor: Transform: Ctrl-T Retime: Ctrl-E Multi: Ctrl-M Improvement: Updated Sequencer to use other minor behaviors in Editor, such as drag/drop and layout/access of menus and toolbars. Improvement: The TRange LexToString now uses "," to split the upper/lower bounds instead of "-" to avoid confusion with negative upper bound values. Improvement: Updated the Cinematic Camera's ratios: Changed 16:9 Digital Film SensorWidth and SensorHeight to 23.76 x 13.365 to match Arri Alexa, and changed 16:9 Film SensorWidth and SensorHeight to 24.00 x 13.5. Improvement: There are several updates to the Sequencer toolbar: Added Save As to the Toolbar Added an Actions menu for operations that manipulate data or selection; Renamed General Options to View Options; Moved Auto Scroll and Show Range Slider to View Options; Removed Go to Time from the menu (it is already in the UI); Removed Show in Content Browser from the menu (it is already in the top Toolbar). Improvement: In Take Recorder, the Animation Asset Name in the Project Settings now defaults to {actor}{slate}{take}. Improvement: Updated Take Recorder settings to the selected Actor is always listed first. Improvement: The Sequence Level Actor now supports RewindForReplay, which stops a sequence when seeking starts in a replay. Improvement: Modified Sequencer network playback so it takes the ping of a player into account when jumping forward in time. This prevents players' systems from constantly needing to resynchronize with the server when playing under high latency situations (resynchronizing with the server causes audio and visual hitches). Improvement: Locked sequences now display a visual indicator with a red border and dims display node text. Improvement: Added tick marks in the audio track display to show audio loop points. Improvement: Improved processing frame rate when outputting video frames as PNG. Improvement: Added keys to the snap field when the user is resizing sections so that resizing sections can snap to keys. Improvement: Added channel extensions for MovieSceneObjectPath. Improvement: Updated marked frames so they are based on the hit index, rather than finding it by the FFrameNumber. Renamed "Clear Marked Frame" to "Delete Marked Frame". Improvement: Dragging and dropping sequences into the tree view now moves items to the root. Improvement: You can now save animations in a Skeletal Mesh track as an AnimSequence. This is also supported by Python. Improvement: You can now change the min/max playback, view, and working values outside of the current range. Improvement: Updated the Cinematic Camera so the Depth of Field method is now Do Not Override to more accurately describe the functionality, as opposed to None. You can also Disable the depth of field completely by overriding the post process settings. Improvement: Sequencer now saves times with FMovieSceneFloatChannel through a serialized buffer, which will significantly speed up Live Link data orders. Deprecated: NewTake in Take Recorder is now deprecated in favor of ClearPendingTake. This lets users simply clear the pending level sequence rather than also clearing out the sources. Removed: Removed the Label Browser option because it was not used. VR-Editor Bug Fix: Fix for crash on exiting VR mode from the taskbar. Bug Fix: Adding a check that we are using the default interactors before warning about legacy UI. This prevents unneeded warnings for Virtual Production. Bug Fix: Fixed a bug causing smooth scrolling with the HTC Vive touch pad in the Unreal Editor VR Mode to be less than smooth, and somewhat erratic. Bug Fix: Fix out-of-date view uniform buffer for selection rendering post-process (broken selection highlights in VRMode). Bug Fix: Changed "Toggle VR" hotkey to add 'Shift' modifier (now Shift+V). Bug Fix: Removed direct null of slate widgets on the VR Editor floating UI because the UI is in the middle of shutting down. New: VR Editor no longer allows transitioning to Play in Editor from Simulate in VR mode. New: Prevent force deleting assets in the Content Browser if VR mode is active. Force deleting is an intensive task that would negatively impact the user experience in VR. World Browser Bug Fix:** Move WorldTileInfo from old to new package when renaming in the editor: this makes sure to propagate WorldComposition settings for this level. Bug Fix:** Prevent double selection notifications when selecting via the Scene Outliner. Gameplay Framework Crash Fix: Fixed a crash in FStreamableManager::OnPreGarbageCollect that happened when deletion of one FStreamable caused other FStreamables to be removed from the manager. Crash Fix: Fixed a crash that happened when passing null to SaveGameToMemory. AsyncSaveGameToSlot now properly handles SaveGameToMemory returning false. Crash Fix: Fixed a crash when adding a gameplay tag without a valid tag source selection. Crash Fix: Fixed a crash related to party members not being local players. Crash Fix: Fixed warning messages and a potential crash when adding a component to an instance in the world if the default mobility of the new component was incompatible with the component it would be attached to. Crash Fix: Removed a few ways for attackers to crash a server through the ability system. Crash Fix: We now make sure we have a GamplayEffect definition before checking tag requirements. Bug Fix: UCameraComponent::OnRegister no longer creates editor-only components when running a commandlet. Bug Fix: The Default Physics Volume is now marked as transient to avoid a failure in CheckForWorldGCLeaks. Because it is transient, it will not be saved or loaded. Bug Fix: Actors that have pending latent actions will no longer be automatically destroyed if bAutoDestroyWhenFinished is true. Bug Fix: Local variables in the user construction script now show up in the details panel. Bug Fix: Fixed the oscillation blend out time for camera shakes. Previously, the blend out's OscillationDuration was not used in the calculation, which led to the camera shake ending earlier than it should have. Bug Fix: Fixed an issue with gameplay tag categories not applying to function parameters in Blueprints if they were part of a function terminator node. Bug Fix: TActorIterator and GetAllActorsOfClass will no longer return actors from a level that is in the process of being incrementally removed from world, unless you are in the RemoveFromWorld scope for that level. Bug Fix: UWorld::Async* trace functions will now assert when run outside of the game thread, as this will result in memory corruption. Bug Fix: Fixed incorrectly set social user on a party member when we have multiple local players. Bug Fix: The InvertedAxis array in the input system no longer grows unbounded. Bug Fix: Fixed an issue with gameplay effects' tags not being replicated with multiple viewports. Bug Fix: Fixed an ensure that could occur when a streaming level was removed as a side-effect of updating the streaming state of another streaming level. Bug Fix: Child actors spawned by child actor components now correctly update their positions when moving the component on a per-instance basis. Bug Fix: The split screen game view port client now has higher precision float values when creating the split screen info in order to remove unwanted black bars at higher resolutions. Bug Fix: "ke " console commands no longer attempt to execute commands on Archetype Objects which belong to CDOs. CDOs were already excluded from "ke " commands, but not default subobjects created by a CDO. Bug Fix: The engine no longer attempts to get the NetMode on server worlds before the world is fully set up. Bug Fix: Fixed cooking to remove non-determinism in FStaticMeshLODResources. Bug Fix: Fixed a bug where a gameplay ability spec could be invalidated by the InternalTryActivateAbility function while looping through triggered abilities. Bug Fix: Instance components are no longer lost after undoing an apply to Blueprint transaction. Bug Fix: Children of added instanced components now appear in the details panel without requiring the user to deselect and reselect the actor. Bug Fix: Child actors no longer lose properties set in the parent construction script when changing properties on the child actor component instance. Bug Fix: Fixed crouching clients observed from a listen server popping up briefly before interpolation corrects the location down. Applied the same fix as for simulated proxies on the listen server. Bug Fix: Added smoothing to replicated server world time delta This ensures that spikes in the perceived server world time caused by delayed replication updates are not directly and immediately reflected in the client calls to AGameStateBase::GetServerWorldTimeSeconds() which would be problematic for systems that want to work relative to server world time. Bug Fix: Prevented NaNs (or excessively large input values causing later NaNs) from entering AController::SetControlLocation. Bug Fix: Filled in ServerLastTransformUpdateTimeStamp for simulated proxies with the value of ReplicatedServerLastTransformUpdateTimeStamp, for code paths that may try to read that value instead. Added client and server timestamps to "p.NetVisualizeSimulatedCorrections" output. Bug Fix: We now call UpdateCharacterStateBeforeMovement and UpdateCharacterStateAfterMovement for simulated proxies during simulation. However, the base implementations avoid the crouch state changes in those functions because those are replicated from the server. This is more for overrides to be able to run custom behavior to match the server simulation. Bug Fix: Changed how we handle updating gameplay tags inside of tag count containers. When deferring the update of parent tags while removing gameplay tags, we will now call the change-related delegates after the parent tags have updated. This ensures that the tag table is in a consistent state when the delegates broadcast. Bug Fix: Added location and current client base (if any) to logging in ClientAdjustPosition if the client does not resolve the movement base. Bug Fix: Fixed cases where FMod and VectorMod would fail to return a result within the expected output range for very large input values. This could in turn result in NaN/Inf propagation if for example those values are used with SinCos during Rotator/Vector/Quat conversion. Added some additional ensures during development to catch similar overflow and uninitialized variable issues sooner. Bug Fix: Fixed a bug that caused some per-class properties to read incorrect data in the editor. Bug Fix: We now ensure that per-class properties, if they exist, will be available as Blueprint variable get nodes. Bug Fix: Fixed actor names of user construction script child actors not reflecting the child actor class if the child actor class is changed on the add component node. Bug Fix: Properties inside sparse class data now respect the AssetRegistrySearchable flag. Bug Fix: Rotating movement component now works as expected in standalone. Bug Fix: Modified SetBaseAttributeValueFromReplication to take the old value as an input parameter. Previously, it was reading the attribute value to try to get the old value. However, if called from a replication function, the old value had already been discarded before reaching SetBaseAttributeValueFromReplication so we'd get the new value instead. Bug Fix: We now make a copy of the spawned target actor array before iterating over it inside when confirming targets because some callbacks may modify the array. Bug Fix: Fixed a bug that was causing properties that were marked as asset registry searchable not to be included with asset data in the asset registry. Bug Fix: Fixed a bug where stacking GamplayEffects that did not reset the duration on additional instances of the effect being applied and with set by caller durations would only have the duration correctly set for the first instance on the stack. All other GE specs in the stack would have a duration of 1 second. Added automation tests to detect this case. Bug Fix: Fixed a regression causing actor overlaps to not properly trigger during initial level load even if bGenerateOverlapEventsDuringLevelStreaming was enabled. Fixed some incorrect related comments in the header. Bug Fix: Fixed a bug that could occur if handling gameplay event delegates modified the list of gameplay event delegates. Bug Fix: Fixed a bug causing GiveAbilityAndActivateOnce to behave inconsistently. Bug Fix: Reordered some operations inside FGameplayEffectSpec::Initialize to deal with a potential ordering dependency. New: SpawnActor can now accept a requested name in its SpawnActorParameters. If the name is unavailable, SpawnActor can return null, report an error, or generate a unique name that is available. New: Added the option to include components from child actors when hiding/showing components in SceneCaptureComponent. New: UChildActorComponent::SetChildClass can now specify the template to use instead of the default instance of the desired class. If used on an archetype, the stored template will be a copy of the supplied template. If used on an instance in the world, this will result in a one-time spawn of a child actor using that template, but if the child actor is destroyed and recreated, it will use the original template again. New: Added a version of GetManagedObject to SignificanceManager that is const and accepts const parameters. New: UGameplayAbility now has an OnRemoveAbility function. It follows the same pattern as OnGiveAbility and is only called on the primary instance of the ability or the class default object. New: Camera shakes now have a new actor/component type which defines a "shake source", along with some attenuation settings. Some new Blueprint functions enable users to start camera shakes that are "anchored" to those sources, and whose shake intensities will change dynamically based on the current camera's distance to the respective sources. New: Added the GetActiveCameraShakes function, which gets the list of currently playing camera shakes. New: When displaying blocked ability tags, the debug text now includes the total number of blocked tags. New: Added two broadcast delegates to GameViewportClient. These events listen for UGameViewportClient::InputKey and UGameViewportClient::InputAxis events, respectively. You can access references to these with the OnInputKey and OnInputAxis functions. New: Blueprint creation from the selected in a level is now presented through a modal dialog box that allows specification of the Blueprint name, path, creation mode, and parent class. The available creation modes are via Harvesting Components (previously available from the Blueprints main tool bar dropdown), Subclassing (previously available from the details panel when a single Actor selected), and Child Actor (a new mode that creates an Actor Blueprint with a child component for each selected Actor). New: When harvesting components from multiple actors, the Actor name will now be included in the harvested component name. New: Renamed UAbilitySystemComponent::InternalServerTryActiveAbility to UAbilitySystemComponent::InternalServerTryActivateAbility.Code that was calling InternalServerTryActiveAbility should now call InternalServerTryActivateAbility. New: Improved data validation for sparse class data types. UHT now checks for the following: Max of one sparse class data struct per class Sparse class data struct must inherit from the parent class' sparse class data struct All sparse properties must be BlueprintReadOnly and EditDefaultsOnly Blueprint-assignable delegates cannot be sparse properties New: Continue to use the filter text for displaying gameplay tags when a tag is added or deleted. The previous behaviour cleared the filter. New: Don't reset the tag source when we add a new tag in the editor. New: Added the ability to query an ability system component for all active gameplay effects that have a specified set of tags. The new function is called GetActiveEffectsWithAllTags and can be accessed through code or Blueprints. New: When root motion movement related ability tasks end they now return the movement component's movement mode to the movement mode it was in before the task started. New: Made SpawnedAttributes transient so it won't save data that can become stale and incorrect. Added null checks to prevent any currently saved stale data from propagating. This prevents problems related to bad data getting stored in SpawnedAttributes. API Change: AddDefaultSubobjectSet has been deprecated. AddAttributeSetSubobject should be used instead. Localization Bug Fix: String Table find/load is now deferred until the internationalization system is initialized. Bug Fix: Correctly updated in-memory String Table asset references when a String Table asset is renamed. Bug Fix: Translation Editor now shows stale translations as "Needs Review" rather than "Untranslated". Bug Fix: Fixed an issue where Blueprint components instanced into a Level could lose their localization data. New: Exposed "exclude classes" localization gather option to the Localization Dashboard. New: Added support for splitting localization data into separate PAK chunks during cooking. There is a new packaging setting, LocalizationTargetsToChunk, which lets you specify which of your localization targets should be chunked during cooking. If enabled, any localization entries corresponding to assets will be split into a separate LocRes file for the chunk containing that asset, and will be removed from any chunks (including the primary chunk) that do not contain that asset. This is useful to restrict localized text to the chunk that contains the corresponding asset for the text, and allows localization data to be selectively encrypted based on the chunking rules. At runtime this chunked data will be loaded into the text localization manager automatically when the chunk is loaded. New: Added the ability to filter the metadata gathered by field and field owner types. This allows you to do things like "gather all properties belonging to actors", or "gather everything but functions". Networking Bug Fix: Added support for IsEncryptionEnabled for child connection objects. Bug Fix: Modified the network driver's accumulated time to use double precision. This changes the format of the stateless connection handshake packet used to negotiate a client to server connection. Bug Fix: Added support for non-default allocator in the TArray passed to SafeNetSerializeTArray utility functions. Incoming array size is clamped if it is too large. Issues with bOutSuccess and return values have been fixed. Bug Fix: Fixed a potential underflow error in the AES packet handlers. Bug Fix: Fixed infinite recursion in UIpConnection::HandleSocketSendResult when SocketSend failure occurs with a PendingNetGame. Bug Fix: Only limit the net tick rate if it's lower than the engine tick rate. Bug Fix: Changed UActorChannel::ProcessQueuedBunches to better respect timeslicing. This addresses issues with over-logging warnings regardless of whether or not the channel was simply ignored for processing. Bug Fix: Fixed an edge case in PackageMapClient where the NetGuidCache could return a partially loaded Package. Bug Fix: ServerFrameTime sends the current frame's time instead of the previous frame. PacketInfo payload for ServerFrameTime and Jitter will now only be sent on the first packet of the frame. This reduces overhead when sending large data chunks in a single frame. New: Full address protocol resolution and round robin connection attempts are now made in the IpNetDriver/IpConnection classes automatically. This enables connecting over both IPv6 and IPv4 networks, and takes advantage of platforms that can use both in order to find the best method to connect to a server. Platforms where this behavior is not desired must call DisableAddressResolution either in their connection class constructor or in InitLocalConnection and InitRemoteConnection before calling any base class functionality. This will make sure that resolution is properly disabled for inherited IpConnection classes. For testing, users can take advantage of CVARs such as net.DebugAppendResolverAddress which will always add the value to any address resolution done and net.IpConnectionDisableResolution which will disable address resolution on any future connections made. New: Modified actor tear off to notify all active network drivers when it occurs on the client or server. New: Added the ability to swap local and remote roles when replicating actor properties in the network driver and replication graph. New: Optimized handling of initially dormant actors to avoid adding them to the active network object list for a single frame, only to be immediately removed. New: Added an export for FScopedActorRoleSwap for use by other modules. New: NetConnection will now sample packet loss on every StatPeriod and store the current value along with a rolling average in the InPacketsLossPercentage and OutPacketsLossPercentage variables New: IPv6 support is now included by default in all desktop platform builds, but is disabled. Setting the CVAR net.DisableIPv6 to 0 will enable IPv6, provided there is device OS level support. Network addresses should be passed through a resolution method like GetAddressInfo in order to express the address in the best way possible. SetIp and GetAddressFromString are still usable, but may require address translation in usages where connection protocols are not defined anywhere. New: GetLocalBindAddresses can be used to get all the addresses a machine can use to bind a socket to instead of relying on the first result or just returning a blank address. This new function automatically checks for multihome requirements and uses them as appropriate. New: There is a new implementation of GetLocalAdapterAddresses, allowing for all platforms to obtain all of the network addresses tied to the current machine. New: Generate enums with RepIndex values for native Replicated Properties. These can be used with simple macro expansions to get Rep Indices for properties without actually looking up UProperty pointers. New: Specifying -NetTrace=[VerbosityLevel] on the commandline now implicitly enables the Net TraceChannel. New: Added Jitter calculation to NetConnection. Jitter is the difference in latency between every packet. The closer to 0 it is the more stable the latency is. To support this, packet headers now contain local clock data used to calculate jitter, and EEngineNetworkVersionHistory has been increased to account for new jitter clock time in packet headers. This will cause connections to fail between pre/post engine builds. Improvement: Added configurable IMessageBus and Debug Name to Messaging RPC to improve encapsulation. Deprecated: The RemoteSaturation variable has now been deprecated, and the InBytesPerSecond value will no longer be sent in every packet header. Deprecated: NetConnection's BestLag variable is now private since it is just a mirror of AvgLag. Removed: Deleted IPv6 specific classes, as they've been deprecated for several engine versions. Replays Bug Fix: Fixed incorrect use of a non-squared value in AActor::GetReplayPriority. Bug Fix: Fixed a crash while using Play In Editor replays related to replicated level script actors. Bug Fix: Fixed a race condition where replay events that are added immediately after beginning recording would be immediately discarded. Bug Fix: Fixed incorrect scale on replicated level actors after scrubbing a replay. Bug Fix: Fixed several warnings related to level script actors of streaming levels in PIE replays. Bug Fix: Custom Delta Tracking state will not be created when recording or playing back replays. ) New: Removed multiple deprecated replay streamer calls that took user strings. New: Added the ability for games to record arbitrary per-frame data to replays and access this data during playback using FNetworkReplayDelegates::OnWriteGameSpecificFrameData and FNetworkReplayDelegates::OnProcessGameSpecificFrameData. Deprecated: Deprecated UNetConnection's InternalAck flag to eventually make it private. Replication Graph Bug Fix: Fixed debug actor counting towards the bandwidth budget in the replication graph. Bug Fix: Fix ReplicationGraph ignoring traffic for the saturation calculations when ActorDiscoveryMaxBitsPerFrame is set to 0. This caused the ReplicationGraph to send more data than was budgeted by the MaxInternetClientRate. Bug Fix: Upgraded ReplicationPeriodFrame to uint16 to fix an overflow error when the NetUpdateFrequency is very low. Bug Fix: Fixed a bug with implicit cast of ReplicationPeriodFrame from uint32 to uint8. New: Added a flag to FActorDestructionInfo that ignores the distance culling test. Improvement: DestructionInfo is now always sent to connections that are aware of AlwaysRelevant Actors. Improvement: Optimized ReplicateDestructionInfo by not testing the destroyed actors list every frame. Instead, it will test if viewers are near destroyed objects only after they travel far enough from the previously tested location. You can set Net.RepGraph.OutOfRangeDistanceCheckRatio to 0 to disable this optimization and test the destroyed Actor list every frame. Socket Subsystem New: Improving Windows GetLocalAdapterAddresses filtering such that only active, up network adapter addresses are returned. New: Socket subsystem now supports creation of FInternetAddrs pre-set to a specific protocol at allocation. This allows for less ambiguity when obtaining/creating an any/broadcast/loopback address. Bug Fix: Fixed MULTIHOME command line argument in SocketSubsystemUnix. Online Crash Fix: Fixed a crash if a log message is emitted during vivoxclientapi initialization. Bug Fix: Fixed Steam OSS initializing a server when starting a client build. This is an outdated initialization flow that was never meant to be a feature and is no longer necessary to perform at startup. To restore this behavior, set OnlineSubsystemSteam.bInitServerOnClient to true in any Engine*.ini file. Bug Fix: Fixed the error messages in various online latent Blueprint actions to mention the correct action. Bug Fix: Fixed a buffer overrun in FVoiceEngineImpl::ReadLocalVoiceData New: Implemented showing and sending friend messages on Steam. Additionally, implemented the store overlay and functionality for automatically adding a product to the user's cart. Also updated test cases to the existing external UI harness to support the new functionality. New: Modified the approval flow when joining a party to support a group of players joining together. New: In a Pixel Streaming setup, the Signalling Web Server will now attempt to connect to the Matchmaker server at an interval determined by the "matchmakerRetryInterval" setting (the default is five seconds) if the "UseMatchmaker" flag is set to true. This also means that the Signalling Web Server and Matchmaker Server can be started in any order. New: Added the "LastOnline" field to OnlineUserPresence. New: We now provide a copy of the Party Member Data to PostReplication in case validation checks desire to roll back to previous data values. New: Moved NotifyPartyInitialized out of PARTY_SCOPE so game subclasses of SocialManager can override it. New: Added OnlineTracing interface for capturing HTTP/WSS/XMPP traffic. New: VivoxVoiceChat plugin - Exposed rtp timeout settings through RtpConnectTimeoutMs and RtpTimeoutMs in the [VoiceChat.Vivox] section of DefaultEngine.ini. New: Added On(X)Complete stubs in the SocialToolkit and called them from within their respective Handle(X)Complete handlers. The game can now react to these notifications while preserving a robust API. New: Added support for multiple local users to the VivoxVoiceChat Plugin. New: Refactored Leaderboards Interface to admit additional parameters when querying leaderboards. The new possible syntax of the command is: \ ONLINE TEST LEADERBOARDS LeaderboardName SortedColumn NColumnName NColumnFormat ... UserId Improvement: Updated the Online Subsystem to use SteamSDK 1.47. Deprecated: Deprecated Steam voice packet classes. Steam no longer requires any platform-specific functionality as of UE 4.23. BuildPatchServices Bug Fix: BuildPatchServices will no longer call GLog->FlushThreadedLogs() when not on the main thread during directory chunking. Bug Fix: Failing to load a file during chunking no longer causes an assert. Bug Fix: Fixed an issue where a generated optimised delta between two binaries will break an installation if the destination binary is re-processed as different data, replacing the original manifest at the same version string. Bug Fix: MergeManifests preserves the original, newly generated buildid rather than clobbering it with one of the input manifests. Bug Fix: Extended the life of the IOptimisedDelta instances to match the FBuildPatchInstaller to avoid race conditions for destruction that occurred when finishing FBuildPatchInstaller::Initialize. Bug Fix: Removed an unnecessary, explicit delete of resume data when completing an EInstallMode::StageFiles installer. Bug Fix: Fixed a possible shutdown crash when packaging chunks. When processor classes complete quickly, the IBuildManifestSet instance will no longer be deallocated before other systems that are using it. New: Added EFeatureLevel::UsesBuildTimeGeneratedBuildId for storing a build-time generated ID. New: When saving a manifest of EFeatureLevel::UsesBuildTimeGeneratedBuildId or higher, a buildid field is serialised as part of the manifest meta data block. This field is generated as part of the FManifestMeta constructor, and thus uniquely saved upon creation of a new manifest object. New: When loading a manifest object, the buildid will be serialised or runtime generated based on the manifest object version. New: When downloading an optimised delta, we verify for agreeable SHA1 and file list, rejecting with error if a problem is detected. New: BuildPatchTool's ChunkDeltaOptimise mode now chooses the appropriate output FeatureLevel for the patch data, based on the provided source and destination manifest files. New: Exposed the file verification error counter from verify statistics. New: Added the meta folder to staging so that the Install staging folder will only ever contain install files. Moved the $resumeData file to a new location if it is found in the legacy location. New: Refactored FBuildPatchFileConstructor error handling to catch serialisation errors as soon as they occur. API Change: Renamed EFeatureLevel::StoresUniqueBuildId to EFeatureLevel::UsesRuntimeGeneratedBuildId. HTTP Bug Fix: The hotfix manager now informs the HTTP module when any HTTP section is updated (for example, [HTTP.Curl]). Bug Fix: [HttpServer] Request query parameters are now correctly parsed with the "&" delimiter. New: Implemented the seek function in libcurl. Currently, this is only implemented for complete rewinds that happen when retrying connections. Online Subsystem Bug Fix: Fixed get user profile process returning failure on all paths. Fixed soting pointer to local user profile object leaving dangling pointers. Bug Fix: Moved bShowBrowserPointer variable getter before ShowWebPageArg->SetPointerEnabled in order to set this using the boolean as expected. Bug Fix: Fixed an inconsistency with the tests where we returned platform IDs instead of UE4 IDs, then passed them back in to the interface expecting UE4 IDs. Bug Fix: Added propper state tracking for the Async task. Bug Fix: Added error handling for individual achievement-setting tasks and fixed error reporting for batch achievement-writing tasks. Note that these tests are only valid for 2017/title-managed achievement format. 2013/event-based achievements will correctly fail the write tests. Read succeeds. New: Modified the base voice engine implementation to store an instance name and not cache the online subsystem directly. New: OnlineSubsystem's method GetFirstSignedInUser will now return the first fully logged in user it can find. Otherwise, it will fall back to its original behavior and return the first local user. New: Added tests for Store and Purchase OSS interfaces usage: "online sub= test store " - eg. "online sub=live test store 9MWBK8Z14MXD" to get product details for ShooterGame store offer 9MWBK8Z14MXD "online sub= test purchase " - eg. "online sub=live test purchase UNUSED 9MWBK8Z14MXD" to initiate checkout for ShooterGame store offer 9MWBK8Z14MXD New: First pass of changes relating to sessions. Users can host, search for, and then join sessions. API Change: Updated FOnControllerPairingChanged (in OnlineIdentityInterface) to provide the number of controllers now assigned to the New and Previous user in the event. If New or Previous user is null, they will be considered to have 0 controllers. Updated the comment about LocalUserNum to be more clear that it describes a person, not a device. Anyone subscribing to FOnControllerPairingChanged will have to update the signature of their handler to take in the extra information. WebSockets New: Added OnWebSocket, created delegate. Added OnMessageSent delegate. XMPP Crash Fix: Fixed a crash in FStropheWebsocketConnection::OnRawMessage when dealing with multiple XMPP connections. Bug Fix: We now properly clean up closed XMPP connections by calling ProcessPendingRemovals through a new Tick function in XmppModule. New: Added IXmppStanza abstract interface to expose stanza data getters independent of implementation; FStropheStanza now inherits and overrides from it. Added OnXmppConnection created delegate. Added OnStanzaSent and OnStanzaReceived delegates, only implemented in Strophe. Paper2D Crash Fix: Updated Paper2DEditorModule to check the validity of GEditor before registering for OnAssetReimport. This prevents a crash when running a commandlet with IsEditor=false. Bug Fix: The Sprite List window in the Sprite Editor now correctly shows all Sprites that come from the same texture as the one being edited. New: Added support for registration of tutorial assets from plugins. Also moved Paper2D tutorial assets into the Paper2D plugin. Physics Crash Fix: Generating a levelset from an empty triangle mesh no longer crashes the editor. Bug Fix: Resolved a cloth binding issue that occurred when applying a cloth that has been created with the Remove-from-Mesh box checked. Bug Fix: Resolved a debug issue by splitting symbols from PhysX/APEX for Linux Add PhysX/APEX and running the symbols as a DebugNonUFS runtime file for dependencies. Deprecated: The Get Default Simulation Factory Class in Clothing Simulation Factory Class Provider is now deprecated. This is replaced by the Get Simulation Factory Class. Platforms Bug Fix: - Fixed the platform extension config location checks in C# to match C++ Bug Fix: Platform - When calculating screen density, we no longer fallback to 0. We instead report unknown, but we use 96 dpi as our default when returning screen density if we don't know it. This is both the default Windows assumes, and what we use internally for fonts in Slate. This allows us to at least do something reasonable on platforms that can't know the physical size of their screen. New: Add support for Apple Development and Apple Distribution certificates from Xcode 11. New: Updated DDSPI with bSupportsGPUScene but also refactored the .inl platform ones to make it easier to add more settings All Mobile Bug Fix: Addressed a case in Mobile Patching Utilities where UMobilePendingContent object could be GCed while content download is in progress. Bug Fix: Packaging will attempt to enforce the correct casing for project file paths to avoid problems when packaging/running builds from the editor to target platforms that are case sensitive (eg: Android & iOS). New: Mobile device profiles have been overhauled. These will not work for all users but are much more appropriate for shipping high quality mobile games. Removed device profiles for iOS and Android devices that are no longer supported. Android devices from minimum spec (Adreno 4xx/Mali T6xx) and above are now mapped to to Android_Low, Android_Mid, Android_High and Scalablity Groups 0,1,2 respectively. Vulkan now disabled by default, enabled only on Adreno 6xx Android 9+, Mali G72/G76/G77 and PowerVR 9xxx devices. The range of iOS devices map their post process and shadow quality using scalability groups. New: The safe zones specified in the PIE advanced options menu are now applied to mobile standalone PIE Mobile standalone PIE now defaults to the ES3_1 feature level. Android Crash Fix: Fixed a crash that can occur when audio manager properties are missing. Bug Fix: IsAllowedRemoteNotifications will now return false if GCM or Firebase plugin is not included for Android. Bug Fix: Added a missing extension to HLSLcc to enable GL_EXT_texture_buffer for atomic imagebuffer operation. Bug Fix: Added an option to force use of legacy ld instead of lld. Bug Fix: Launch notification events will now be verified before registering them. Bug Fix: Added an extra condition to check for 64-bit ABI support. Bug Fix: Provided NDK20 fallback tool paths for some architectures. Bug Fix: Disabled ld.gold for ARM64. Bug Fix: Fixed an issue with Stratus XL triggers not appropriately firing input events on Android. Bug Fix: Fixed bindings for DualShock controllers on Android devices running version 10 or greater. Bug Fix: Empty AAR directories on Android will now be ignored. Bug Fix: Fixed an issue with attempting to acquire WRITE_EXTERNAL_STORAGE when we already have it granted on non-shipping builds. Bug Fix: Fixed a potential error in detection of needed install batch files for Android. Bug Fix: Threshold trigger button pressed events are now optional by device type on Android. Some controllers do not need this functionality and will send double input events. Bug Fix: Fixed issue with PLATFORM_USED_NDK_VERSION_INTEGER. It will now show the proper NDK level used to compile. Bug Fix: Added AndroidRelativeToAbsolutePath to AndroidPlatformFile. Bug Fix: Fixed an input issue that can occur with floating keyboards on Android due to improper Y offsets. Bug Fix: Fixed an issue with the Android virtual keyboard interacting incorrectly with emoji when using backspace. Bug Fix: Fixed linker warnings with libvrapi.so. Bug Fix: Added bDisableFunctionDataSectionsoption for Android. Add this to DefaultEngine.ini under AndroidRuntimeSettings if you see an issue with R_AARCH64_JUMP26 being out of range with the linker. Bug Fix: Fixed a compile error with bUseNEONForArmV7=true for Android where vqtbx1q_u8 is an A64 instruction. Now has a fallback to use FPU version for VectorContainsNanOrInfinite. Bug Fix: Fixing readonly check for PlatformFile. Bug Fix: Now using libc++_shared STL on Lumin to match rebuilt Android libraries. Bug Fix: Deleted problematic copy constructors on named pipe. New: May now select path and filename overrides for OBBs from Java on startup. New: Added Engine/Extras/Android scripts to install needed components and set up environment variables for Android with new NDK 21 New: Enabled WEBM for Android media player. New: Added bDisableFunctionDataSectionsoption for Android. Add this to DefaultEngine.ini under AndroidRuntimeSettingsif you see an issue with R_AARCH64_JUMP26 out of range with the linker. New: Eliminated recursive file system search for cursor assets on platforms that don't support them. New: Changed FQueuedThread so that it doesn't wake up every 10ms while waiting for work. This wakeup is a waste of cpu time from having to context switch for no reason. New: Added more control over Gradle project. The following are available in UPL: — Sets ANDROID_TOOLS_BUILD_GRADLE_VERSION in gradle.properties — Adds settings.gradle Improvement: Updated small OBB limit to 1 Gb. Improvement: Improved the Network Changed Manager's ability to detect offline networks that are incorrectly identified as having a connection by the Android system. iOS Bug Fix: The iOSReplayKit plugin's Blueprint functions are now allowed to be used in the editor and on other platforms as stub functions. Bug Fix: Prevents an error being displayed in the log before staging if the bundle is not already installed. Bug Fix: GameCenter popup no longer appears on iOS even when GameCenter is deactivated. Bug Fix: Fixed issue with iOS remote build ssh command failing for some Windows users. Bug Fix: Made a change to prevent iOS provisioning parse failure when remote building on a Mac without internet access. Bug Fix: Re-enabled iOS Metal GPUtime by removing any overlaps between commandlists, including overlaps across frames. GPUtime is counted as close to the expected end of pipe time as possible. Bug Fix: Fixed FMetalShaderPipeline initresourcemask. New: Added support for thumbstick buttons and special left on iOS. New: Fixed an issue where packaging for tvOS would fail for Client targets. New: Increased the number of Metal blend state bits in the PSO key from 5 to 7 because we now pre-initialize many more blend modes. New: Added a project setting to allow files created by UE4 to appear in the iOS on-device Files app. New: Removed manual command buffer internal resource tracking and debug group association. All of the data this was tracking is available from the Xcode tools. The manual resource tracking did not interoperate well with the Metal validation layer and could cause a crash on exit in development and/or debug configurations. Linux Bug Fix: Fixed issue that prevented editor windows from restoring when minimized. Bug Fix: Added PLATFORM_LINUXAARCH64 to Platform.h Bug Fix: Set UserTempDir for Linux to $TMPDIR or fallback to /var/tmp Bug Fix: Added missing space to SanitizerInfo string. Bug Fix: Implemented module path name for Unix to avoid hard coded default path. Bug Fix: Disabled ispc for Linux AArch64 Currently using ispc v1.12 which has add "experimental AArch64" support. Bug Fix: Addref Vulkan memory allocation so it's released when not used by framebuffer and surface FrameBuffer ctor was creating a view directly onto a texture surface, but was not holding the allocated memory. If the texture was freed before the framebuffer was destroyed, we could run into use after free type issues. Bug Fix: Fixed Linux address sanitizer builds. Bug Fix: Reduced NullRHI static buffer size by using dynamic buffer Reduces allocation size & fixes potential silent buffer overrun. Bug Fix: Fixed issue with shell script not being able to handle quoted arguments. Bug Fix: Default LinuxAArch64 platform in binary now builds to the same default as Linux platform. Bug Fix: Don't add local launch device for Linux AArch64 platform Local device right now will never be an Arm64 device. Bug Fix: Set PrecompileForTargets to None for AArch64 builds of SoundVisualizations and Kiss_FFT. Bug Fix: Send 127.0.0.1 by default for Linux/Mac as well for UnrealInsights. Since there's no Event to close dont need to check for this. Bug Fix: Added win64 platform check for VisualStudioDTE. Bug fix: Fixed Linux build break with case sensitive headers. Bug Fix: FixedLinux build break (SEnumComboBox.h -> SEnumCombobox.h) Bug Fix: Fixed race when two threads are trying to create a folder on Linux. Bug Fix: Fixed command quoting in Linux chmod command. Bug Fix: Override GetPortableCallstack to avoid 1MB allocations in crash signal handler. Bug Fix:: Removed VulkanLinuxPlatform RenderOffScreen logging. Bug Fix: Fixed Linux build break (includes paths with backslashes) Bug Fix: Fixed UnrealBuildTool unused variable warnings. Bug Fix: Fix for ISPCTexComp dll not loading on projects on a different directory. Fixes this warning on UE4Editor startup: LogCore: Warning: dlopen failed: /epic/UE4.git/Engine/ThirdParty/IntelISPCTexComp/Linux64-Release/libispc_texcomp.so: cannot open shared object file: No such file or directory LogTextureFormatIntelISPCTexComp: Warning: Unable to load ../../../Engine/ThirdParty/IntelISPCTexComp/Linux64-Release/libispc_texcomp.so Bug Fix: Ensures start time for events is now greater than last end time for all nodes. Bug Fix: GpuProfilerEvent times now clamp to valid values, SanitizeEventTree was not clamping start times to be >= to previous root start times, so we were hitting asserts in TraverseEventTree() for: lastStartTime >= GpuProfilerEvents[Root].GetStartResultMicroseconds(). Also removed unused variables. Bug Fix: Now using secure_getenv() instead of getenv(). New: Missing toolchain warning now only appears on Linux + Win64. New: Added linux sanitizer information to build details if set. New: Build nvTriStrip and ForsythTriOptimizer with UE clang toolchain (link with libstdc++) Removed stdc++ dependency for MeshBuilderCommon. New: Added LinuxAArch64Server and LinuxAArch64Client build targets. New: Added VULKAN_ENABLE_DUMP_LAYER to VulkanLinuxPlatform.h. New: System compiler is no longer the default fall-back on Linux. We now only check for and use the system compiler if "-ForceUseSystemCompiler" command line is passed Also, no longer default to epic compiler if -ForceUseSystemCompiler is specified and the system compiler can't be found. New: Moved Linux architectures to Linux package project submenu. Added a PlatformSubMenu ini entry to DataDrivenPlatformInfo sections, and coalesced these in FPackageProjectMenu::MakeMenu(). New: Added a more descriptive error messages for Linux SDK Toolchain Not Found for three cases: .$ LINUX_MULTIARCH_ROOT=/tmp/foobar make BlankProgram bash "/epic/UE4-23.git/Engine/Build/BatchFiles/Linux/Build.sh" BlankProgram Linux Development Fixing inconsistent case in filenames. Setting up Mono Building BlankProgram... Unable to locate Linux SDK toolchain at /tmp/foobar/x86_64-unknown-linux-gnu. ERROR: GetBuildPlatform: No BuildPlatform found for Linux make: *** [Makefile:183: BlankProgram] Error 5 make BlankProgram ARGS="-forceusesystemcompiler" bash "/epic/UE4-23.git/Engine/Build/BatchFiles/Linux/Build.sh" BlankProgram Linux Development -forceusesystemcompiler Fixing inconsistent case in filenames. Setting up Mono Building BlankProgram... Unable to locate system compiler (-ForceUseSystemCompiler specified). ERROR GetBuildPlatform: No BuildPlatform found for Linux make: *** [Makefile:183: BlankProgram] Error 5 LINUX_MULTIARCH_ROOT= make BlankProgram bash "/epic/UE4-23.git/Engine/Build/BatchFiles/Linux/Build.sh" BlankProgram Linux Development Fixing inconsistent case in filenames. Setting up Mono Building BlankProgram... Unable to locate Linux SDK toolchain. Please run Setup.sh. ERROR GetBuildPlatform: No BuildPlatform found for Linux make: *** [Makefile:183: BlankProgram] Error 5 New: Added Linux AArch64 FreeType2 libraries. New: Added support for Linux offscreen Vulkan rendering, which is enabled with -RenderOffScreen flag. This fixes several Slate issues when rendering offscreen. New: Enabled Neon instrinsics for Linux AArch64. New: Moved to new llvm clang 9.0.1 v16 toolchain. New: Added bGdbIndexSection bool to LinuxToolChain. New: Added Linux AArch64 libs for FreeType2 v2.10.0, ICU 64.1, and HarfBuzz 2.4.0 Includes BuildForLinuxAArch64.sh cross compilation scripts Adds LinuxAArch64 back to installed engine builds. New: Linux now always uses lld linker with clang v9 and above. New: Can now generate .debug_pubnames and .debug_pubtypes sections in a format suitable for conversion into a GDB index. This option is only useful with a linker that can produce GDB index version 7. New: Added protected to bGdbIndexSection bool. New: New build of python2.7 for Linux now supports zlib, bz2, and ssl. New: Use msbuild in mono for Linux when using an installed copy and mono >= 5.0. New: Batch writing of ini files is now possible. Mac Bug Fix: Move GetTickableObjects() and GetPendingTickableObjects() definitions to cpp to ensure there's only a single instantiation. This fixes editor objects not ticking and updating sometimes. Bug Fix: Fixed a path encoding issue when drag-dropping files containing non-ASCII characters on macOS Bug Fix: Re-enable FMacPlatformProcess::IsSandboxedApplication. Bug Fix: Moved a couple of Cocoa calls that hide the window on Mac before it's destroyed to the main thread. This should solve a rare game hang at exit. Bug Fix: Fixed crash at exit on shutting down the PSO cache New: Enabled Metal RHI runtime virtual texture support. New: Implemented FPlatformMisc::GetCPUBrand() for Mac, based on Linux implementation. Also, switched FMacPlatformMisc to use __cpuid intrinsic. New: Enable Metal RHI runtime virtual texture support for macOS. Windows Bug Fix: Reset Windows key mappings when input language changes New: Added raw input simulation over remote desktop on windows since remote desktop uses absolute mouse position in mouse move events. This prevents the cursor from getting stuck over RDP. Rendering Crash Fix: Wait for streaming Virtual Texture transcode to complete before deleting file handles. Fixes a rare race condition crash when destroying then recreating a streaming Virtual Texture. Crash Fix: Fix import crash with low guide count for hair grooms. Crash Fix: Disabled render to Virtual Texture on primitives with no Static Mesh, which fixes a crash when inadvertently adding a render to Virtual Texture on a scene primitive component that doesn't support it such as a Brush component. Crash Fix: Fixed a crash due to giving a negative MipCount to "RHICalcTextureCubePlatformSize" from various engine Texture classes. Crash Fix: Fixed a crash when trying to ray trace the sky light prior to generation of sky light importance sampling data. Crash Fix: Fixed crash in Pixel Streaming Signaling Web Server when some users disconnected. Crash Fix: Fixed Vulkan crash on shutdown in TEST/SHIPPING. Crash Fix: Don't transition single mip individually. This is not needed, and also causes inconsistent layouts across the different mips, which fixes a crash when enabling edit layers on landscape. Crash Fix: Fixed crashing that could occur in Vulkan on exit in certain circumstances. Crash Fix: Fixed the handling of "FRHITextureReference" in Vulkan that could cause a crash when importing UDIM textures. Crash Fix: Fixed crashes that could happen in D3D12 when indirect argos buffer can be a previously pending UAV, which becomes a PS\Non-PS read. ApplyState will flush pending transitions, so enqueue the indirect arg transition and flush afterwards. Crash Fix: Fixed a crash that could happen when "OnWorldCleanup" is called with "NewWorld == this". This happens when loading levels through the Content Browser and modifying the "Levels". In this case, do not clean up the persistent uniformbuffers of the FScene, as it will not be deleted Crash Fix: Applied a fix for editor crashes when pressing Build. Crash Fix: Fixed a D3D12 crash where it initialized over garbage data in the StreamOut section of D3D12_GRAPHICS_PIPELINE_STATE_DESC. This also initialized D3D12_COMPUTE_PIPELINE_STATE_DESC and added missing validation code to PSO creation. Bug Fix: Added fix for incorrect vertex indexing in canvas tile renderer. Bug Fix: Fixed a missing barrier in "RenderUnderWaterFog." The pass was not calling "CopyToResolveTarget" and therefore the render target was left in a writable state. Bug Fix: Fixed an issue with a nullptr vertex buffer being passed in with "FSkinWeightLookupVertexBuffer". The "GetNeedsCPUAccess" check was overriding the check for valid "VertexBufferRHI", when setting bSRV, in FSkinWeightLookupVertexBuffer's "InitRHI" function. Bug Fix: Added debug names to Distance Field lighting resources to aid debugging. Bug Fix: Fixed the creation of cube array resources in the reflection environment. The old code was clamping the minimum number of cubes in the reflection environment cube array to 2. This would force that the RHI resource was always an array, rather than a single cube. This is no longer necessary. Bug Fix: Fixed incorrect barriers in Light Propagation Volumes. Auto-write was enabled on the LPV targets despite the LPV system handling the required barriers manually. Bug Fix: Fixed missing resource barriers in "PrecomputedVolumetricLightmap.cpp." Bug Fix: Resolved an issue that was causing negative draw call counts. Bug Fix: Fixed a validation error in "FVulkanResourceMultiBuffer" to store the size explicitly when using BUF_Volatile when updating buffers larger than the default MaxSize. Bug Fix: Moved Vulkan indirect draw barriers into Transition Resources making it impossible to transfer a resource used as a normal shader resource, which is rarely done in practice. A transition refactor will fix this. Bug Fix: DX12 bulk data texture upload now works properly with mipmapped texture arrays and compressed textures. Bug Fix: Fixed an issue with FRHIGPUFence by adding a thread safe counter to prevent "Poll" from returning wrong values while "Enqueue" hasn't yet been processed. Bug Fix: Fixed an incorrect check for usage of scene depth by a material. Bug Fix: Fixed a tangent space computation for Material Billboard component. Bug Fix: Fixed "CopyToStagingBuffer" implementations on D3D12. Other platforms apply the Offset parameter to the source buffer address only. Bug Fix: Applied a fix for case when "MAX_SRVS == 64" and "CurrentShaderSRVCounts == 64". Bug Fix: Added a fix for textures accidentally not getting any inlined mips. Bug Fix: Added vertex buffer debug names to DX11 RHI. Bug Fix: Some 64-bit fixes for "FTextureSource" that fixes import of large UDIMs. Bug Fix: Standardize "-novendordevice" on D3D11, D3D12, and Vulkan. Bug Fix: Fixed compiler warnings in MCPP and made main arguments a dynamic array when "MAX_OPTIONS" is exceeded. Bug Fix: Fixed incorrect calculation of tangent space in landscape vertex shader. Bug Fix: Fixed incorrect calculation of Runtime Virtual Texture volume bounds when copying from a component with zero volume bounds. Bug Fix: Fixed calculation of footprint offset for DX12 texture copy. The previous logic was only correct if copying a mip or slice range starting at zero. Bug Fix: Fixed an issue computing wrong sizes for mips of non-power of two UDIM texture when computing Virtual Texture address wrapping. Bug Fix: Fixed issues with the Virtual Texture Streaming property in the Texture Editor that would be hidden if the project has Virtual Texturing disabled. This also includes a fix for the Thumbnail/editor text that reflects the actual VT state of the texture, not just the property being set. Bug Fix: Fixed lifetime issues with VulkanShaders deleting themselves while still needing to be used. Bug Fix: Fixed DiaphragmDOF on data driven platforms. Bug Fix: Added shader index to FD3D12RayTracingShaderTable's "FShaderRecordCacheKey." This fixes a problem where same resources may be bound at the high level to records that have potentially different local root parameter layouts. Bug Fix: Making overlay menu button transition on click from + to x when menu is opened/closed. Bug Fix: Fixed the wrong atmosphere being applied on opaque objects in front of it. Bug Fix: Fixed rare validation error issues with transition textures that are not render targets or depth targets to GENERAL initially. This makes Vulkan work as D3D reading uninitialized textures used to create a validation error. Bug Fix: Fixed unity test breakage caused by Vulkan global function by fixing a typo inside "DumpMemory". Bug Fix: Added option to do a single renderpass for shadows which fixes flickering on some devices and improves performance. Bug Fix: Fixed a sample out-of-bounds issue in single layer water pass. Bug Fix: Fixed potential denormalized quaternion when manipulating atmospheric lights. Bug Fix: Fixed hair strand bounding box when attached to Skeletal Mesh. Bug Fix: Moved Abstract specifier out of "HideCategories" list in "SceneCaptureComponent". Bug Fix: Removed hair from environment capture and Planar Reflection capture. Bug Fix: Fixed HISM not updating scale and rotation in the game mode. Bug Fix: Fixed the wrong hair width being displayed into the group option in the Details Panel. Bug Fix: Applied fixes to DX12 for potential erroneous transition when using "ReadSurfaceData/GetStagingTexture:. Also, fixed "ReadSurfaceData" of DXGI_FORMAT_R8_UNORM render targets. Bug Fix: Fixed non-determinism in cooked UMapBuildDataRegistry. Bug Fix: Fixed dummy texture being overwritten in some cases. Bug Fix: When computing instances dithering LODs, offset instances by the mesh pivot to match CPU LOD logic more closely. Bug Fix: Wait for Streaming Virtual Texture transcode to complete before deleting file handles. Bug Fix: Fixed responsive TAA on translucent materials. Bug Fix: Applied fix for FConditionalScopeResourceBarrier-related assert to not trigger a resource transition for resources that don't have state tracking enabled when the source and destination state are identical (in the case of the landscape edit layers : CPU readback resource, whose state is always D3D12_RESOURCE_STATE_COPY_DEST) This could be handled on the client side by not using a FConditionalScopeResourceBarrier object in the first place but this would require the client to do pretty much what the scope object does internally. Bug Fix: Added a workaround for corrupt instancedata. Bug Fix: Applied fix for FXSystem being reused across level changes. Bug Fix: Added Option to prevent multithreaded PSO creation. Temporarily disabled multithreaded PSO creation on some problematic devices. Bug Fix: Fixed an issue where bHasInputAttachments was never initialized. Bug Fix: Release uniformbuffers when the world is cleaned up. This releases a lot of render target references. Bug Fix: Release FXSystem GPU resources "onworldcleanup." This frees some lingering render resources that are not free until the UWorld is released, which it might never be. Bug Fix: Fixed two validation issues in Vulkan caused by the Track Editor Thumbnail. Bug Fix: Fixed computation of miss shader table size. Bug Fix: Added missing Vulkan pixel formats. Bug Fix: Fixed an issue with "RHIReadSurfaceData" not allocating a large enough buffer. Fixes an issue with texture painting. Bug Fix: Fixed Vulkan shutdown issue that could happen when refcount from pending state would hit 0. It would end up modifying the array being iterated on. Bug Fix: Fixed Vulkan "RHIReadSurfaceData" when reading back mip levels other than 0. Added missing transition back to Readable in LandscapeEditLayers and added VULKAN_EXTERN_EXPORT for VULKAN_ENABLE_IMAGE_TRACKING_LAYER. Bug Fix: Fixed an issue with Sky Light triggering a check when set to 0 ray traced samples per pixel. Bug Fix: Fixed more incorrect NEE Pdf calculations due to not taking exact material determination and ordering into account. Bug Fix: Fixed erroneous calculation of NEE Pdfs for environment contribution off of diracs. Bug Fix: Fixed HitT not set on full payload. PackedPayload is overwritten at the end of the shader. Bug Fix: Fixed incomplete tool tip for building static lighting, that was giving misleading information on the reasons building lighting could be disabled. Bug Fix: Fixed mip sky light pdf texture and solid angle texture defines set in the incorrect location for sky light direction sampling. Bug Fix: Fixed shader compile fails when 'GBUFFER_HAS_TANGENT' is enabled. Bug Fix: Applied a fix for Background Blur in UMG being filled with black when UI.CompositeMode is used in HDR. Bug Fix: Shader compile log messages now appear during shader compile instead of after all shaders compiled. Bug Fix: Updated debug name of render targets when they are recycled. Bug Fix: Disabled "r.Vulkan.UploadCmdBufferSemaphore" to fix a validation error on shutdown. Bug Fix: Added a fix to prevent overwriting perv cloth xform since cloth xforms are written multiple times per frame. Bug Fix: Fixed pixel depth offset invariance between depth pass and base pass that was causing z-fighting in some cases. Bug Fix: Write out 0s instead of FLT_MAX when morph target LOD's bounds are absent (lower LOD isn't using full morth target buffer). Bug Fix: Fixed a typo for initial layout transition wasn't actually using the intended layout type. Bug Fix: Fixed inverted V component when using hair root UV. Bug Fix: Do not perform project setting based validation on engine visualization materials during DDC warming. Bug Fix: Make AO decals respect meshes that disable receiving decals. Bug Fix: Refactor RT selection during base pass into a single function. Expose helpers to retrieve used RT for specific systems. Bug Fix: Exposed pre-skinned vertex buffer to local vertex factory so that pre skin vertex position is available to material nodes that use it when derived vertex factories generate position buffers that are post skinned. This is particularly relevant for GPUSkinCache. Bug Fix: Removed applying decal fade all the time in the shader. It is by design that the fade parameters are exposed to the material and it will decide how "fade" is applied to the material. Bug Fix: Fixed decals reading GBufferA. It cannot be always indiscriminately set as a RT. Packing active RT together as we do with base pass shading. Bug Fix: Handle RT bindings for decals that write or read to Gbuffer.normals. New: UAV's in the PS stage are now bound as normal resources, and are no longer tied to "SetRenderTargets." New: Added sparse voxelization for hair strands. This allows you to have thinner details on hair and fur when not using "Deep Shadowmap" enabled on lights. New: Added "FrameIndex" override to "FSceneView." New: Adjusted default streaming priorities for Virtual Textures so that they now match the standard texture streaming priorities. New: Added support for export of Runtime Virtual Texture streaming mips to BMP image format. New: Added source mesh slot (optional) into the Groom Component to project hair on a different meshes than the Skeletal Mesh to which a Groom Component is attached. The source and the target mesh need to share the same UV space. New: Added support for Crunch compression of Runtime Virtual Texture streaming mips. New: Added Groom Binding Asset that stores the binding information of a groom onto a Skeletal Mesh. New: Added Matrices.ush which provides functions to build projection and look-at matrices identically from the CPU version. New: Reduced size of "FFileCache" (the file cache used by Virtual Texture system) from 32MB to 16MB. You can make the size configurable with the console variable "fc.NumFileCacheBlocks." New: Increased the SRV limit in DX12 from 48 to 64 so a shader with 50 SRV fits. New: Added visibility pass for hair rendering that improves performance (enabled by default). New: Added support for light channels when hair strands. New: When registering a component, "CaptureOnMovement" is now considered. This prevents updating the render target in the editor when "CaptureOnMovement" is disabled. As a result, when it is disabled, the user is responsible for initializing the render target and update whenever they need. New: Added guard for UPrimitiveComponent's "Set Custom Primitive Data Internal" from being sent an invalid index below zero (it already guarded the upper bounds). New: Added hair group information onto groom instance and hair component, such as Hair width, and the root and tip can be overridden. New: Created streaming Virtual Texture staging textures with flags dependent on whether we are running in multithreaded rendering mode. This fixes very slow performance for D3D11 non-multithreaded mode. New: Changing image writing API from TArray to TArray64 data so images can be over 2 billion pixels. New: Add new Runtime Virtual Texture format "BaseColor_Normal_Specular_Mask_YCoCg." This adds an additional 8 bit "mask" channel for generic use in RVT materials. New: Added validation and time estimate when importing a Groom asset. New: Added multi-shader variant of "ClearUnusedGraphResources." New: Added "CanCreateNew" function to "UTexture2DArrayFactory." New: Added "GRHIMinimumWaveSize" which can be used to query the smallest GPU SIMD width possible on current GPU. Value can be between 4 and 128, depending on the GPU architecture. New: Added "bSupportsWaveOperations" to "FDataDrivenShaderPlatformInfo." New: Added 64-bit support to jpeg and EXR file writing. New: Implemented FD3D11DynamicRHI::RHIBlockUntilGPUIdle function. New: Added DXC integration into Vulkan shader compiler. The DXC path for Vulkan can be enabled with console variable "r.Vulkan.ForceDXC." Currently, this is only supported on Windows platform. New: Renamed "r.DumpSCWQueuedJobs" to "r.ShaderCompiler.DumpQueuedJobs." New: Renamed "r.SkipShaderCompression" to "r.Shaders.SkipCompression.": Added support for using "VkImageFormatListCreateInfoKHR: for sRGB textures. New: SkyAtmosphere component is now accessible from the Actor instance. New: Added support for creating structured and vertex buffers with initial CPU data through the render graph. New: Converted FRHIAsyncComputeCommandList" to "FRHIComputeCommandList." Moved "FRHICommandList" to inherit from "FRHIComputeCommandList." New: Added option to compute SkyAtmosphere transmittance per pixel. This is controlled with the console variable "r.SkyAtmosphere.TransmittanceLUT.LightPerPixelTransmittance." New: Added support for the use of GPU assisted validation using the console command "r.Vulkan.GPUValidation." New: Implemented AMD D3D12 vendor extension support (with custom UAV root signature binding). New: Added Global Uniform Buffer Support in the RHI. New: Implemented bSupportsUInt64ImageAtomics in "DataDrivenPlatformInfo," along with "COMPILER_SUPPORTS_ULONG_TYPES" and "COMPILER_SUPPORTS_UINT64_IMAGE_ATOMICS" compiler definitions, and a generic "ImageInterlockedMaxUInt64" abstraction. New: RHI Viewports now take float parameters. New: Added bSupportsRTIndexFromVS to \DataDrivenPlatformInfo. New: Added generic hooks for primitive shader support. New: Implemented GRHISupportsPrimitiveShaders capability flag. New: The command line "-NumAFRGroups=" now implies AFR in addition to setting the number of AFR groups; if just "-AFR" is specified "NumAFRGroups" is set to "MaxGPUCount", as before. New: Added support for temporal history upscaling on data driven platforms. New: Upgraded AGS to 5.4. It enables 64-bit atomic support on AMD hardware, as well as custom UAV bind slots in D3D12 (among other features). New: Added Vulkan support for exclusive fullscreen extension for Windows. New: Added an option the SkyAtmosphere component for the atmosphere to follow the SkyAtmosphere's transform. New: Added virtual planet shadow when sampling transmittance per pixel to the SkyAtmosphere. Transmittance lut does not contain planet shadow so shadow from one planet on another was incomplete in space view. This is achieved using a simple ray/sphere intersection for now. In the future, we'd like to integrate it in the transmittance lut (last texel data on mu). New: Added console variable to control fine-grained RHI command profiling events. Use "r.RHICmdTraceEvents=1" to enable tracing events (off by default). New: Make Buffer Visualization can config with Material Instances. New: Support for shader categories, for shader cook statistics. Added "CompileTime" stat with some tool tips. New: Added tracking code to help debug memory leaks from Vulkan objects. This is disabled by default. Use "r.vulkan.dumpmemory" to dump when enabled. New: Added "MinBitrate" console variable, that makes PixelStreaming ignore video encoder bitrate recommended by WebRTC if it drops below MinBitrate value. This can be useful to sacrifice interactivity for better video quality on low-bitrate connections. It is disabled by default. New: Added stats for shaders compiled by materials. "CookByTheBook" now outputs a .csv file into Saved/MaterialStats. Statistics View now lists all the csv files from that folder. Load is done lazily Added shortcut key in Content Browser MaterialPath is now serialized into a shadermap (if debug info is allowed). New: Vulkan Graphics PSO cache rewrite/cleanup. Multithreaded creation of PSOs, which should reduce hitches and make optimizing shaders faster. It only uses one map&lock and has one type. It uses UE4 shaderkeys instead of bytecode shaderhash It enables parallel PSO creation We added the "VulkanPSO" stat group LRU is now compiled in but disabled on all Vulkan platforms. New: Added Support Virtual Texture Streaming for Vulkan Linux. New: Added opaque shadow mask permutation for hair for evaluating the transmittance instead of the front depth. This helps to attenuate shadow cast by groom when the hair strands are small/thin. New: Swapping Bcrypt node module to native js version Adding npm console commands to ease usage. New: The Virtual Texture category is hidden on non-UMeshComponent. New: Added a parameter for the lens effects principal point to not be in the center of the screen. Needed for overlapped rendering. New: Added supersampling permutation for hair transmittance evaluation, which is used for cinematic quality. New: Added an option to disable Z-ordering for world widgets (Slate.WorldWidgetZOrder = 0). Disabling Z-ordering may improve widget batching and rendering performance but could result in an incorrect rendering order of overlapping widgets. World widget Z-ordering is enabled by default. New: Pre-exposure is now being forced to always be enabled shaders. New: Added a RenderDoc "CapturePIE" command which starts a PIE session and captures the specified number of frames. Useful when debugging initialization problems, as it allows the first frame of a PIE session to be captured. New: Added ability for selective Skin Cache on Skeletal Mesh Components. Skin Cache can be specified per LOD, and the USkinnedMeshComponent can override per LOD what the Skeletal Mesh Component specifies. There is also a project setting to use as Skin Cache as inclusive or exclusive.: Updated the nvTextureTools libraries for Win64. New: Added DX11 support for EnumAdapterByGPUPreferences. This allows OS assisted selection for the GPU. Use the command line "-gpupreference=[n]" to set the selection. 1 will set the preference to minimum power, and 2 (the default) will set the preference to highest power. Any other value causes it to fall back to the old code. Also, upgrade DXGI to the latest. New: Enabled "SingleLayerWaterIndirectDraw" for Vulkan desktop. New: Added a console variable to skip shader compression with "r.Shaders.SkiCompression." Enabling this can save a significant amount of time when using debug shaders. New: Added limited forward rendering support for hair strands rendering. New: Added ability for WPO and Tessellation to render into velocity pass even if the Actor has not moved. This behavior can be toggled with the console variable "r.BasePassOutputsVelocity." By default it is set to 0 and not enabled. New: Downgraded missing "TexCreate_ShaderResource" error to be non-fatal. New: Exposed "AllowShaderWorkers" to console variables so that it can be set from CansoleVariables.ini. New: Shader compiler warnings can now be stored in the DDC and re-emitted during a cook. Logging is gated on "r.ShaderCompiler.EmitWarningsOnLoad" (off by default). New: Renderer settings should not be hidden behind Defines. Moved ray tracing's texture LOD console variable out. This fixes all non-raytracing platforms. New: Added updates for how we call the Metal compiler, version our DDC key, and find our toolchain. The following has changed: Uses "xcrun" to invoke xCode tools. Picks the SDK based on ShaderPlatform. Fetches the correct version from Metal. Command line "-v"determines the binary path and header paths through Metal. Command line "-print-search-dirs" uses xcrun for Metal-ar and metallib. This should prevent us from mixing versions of metalfe, metal-ar, and metallib (and the headers. Improvement: improved Vulkan LLM support to track render target and spare memory usage. Improvement: Improved Clear Coat shading model to support directional sources. Improvement: Added code for Vulkan to prevent overlap of upload and graphics buffers. Improvement: RenderDoc plugin improvements: In the plugin options, we can now set a delay before the capture takes place after hitting the button, either in seconds or in ticks. The default delay is still 0. In the plugin options, we can choose the number of frames to capture. The default is still 1. This implies the capture turns into a "capture all activity" as we cannot selectively capture a single viewport across frames. As before, the options are accessible with the command line so that this also works in PIE mode. Fixed the notification that stayed on screen forever in PIE mode. The notification shows when a delayed capture will start as well as which frame is currently being captured. Deprecated: "BUF_UINT8" and "BUF_UAVCounter" have been deprecated. Deprecated: Added "RHIClearUAVUint" and "RHIClearUAVFloat" functions and deprecated "UAV" global functions in ClearQuad.h: The ClearUAV implementation for structured buffers was not correct and would lead to D3D debug layer validation errors. This is because it is not allowed to bind a structured buffer UAV to a type RWBuffer<> shader parameter, as found in the clear replacement CS. Use of "NumBytes" was also not correct. The ClearUAV functions using this would assume the UAC format is R32_UINT and divide the NumBytes parameter by 4. For example, if the format was R8_UINT, the divide by 4 means only ¼ of the resource was cleared. Deprecated: "ViewCustomData," "CustomLOD," and "StaticMeshBatchVisibility" are now deprecated and will be removed in the next engine release. Removed: Deleted "SRGBO_ForceEnable" flag, and replaced the one usage of it with "SRGBO_ForceDisable". Removed: Removed "Default Virtual Texture Material" as this was creating unnecessary shader compilation overhead from being a special engine material. API Change: With the new functions, rendering code should call either the "Uint" for Float version depending on what the underlying format of the UAV is. Everything the UAV covers is cleared, so there is no need to pass "NumBytes" anymore. Structured buffers are treated as "R32_UINT" in both the Uint and Float implementations. The X component of the value vector is copied directly into the buffer with no format conversion. This matches D3D11 semantics. It is invalid to call "RHIClearUAVFloat" on an integer-format UAV, or "RHIClearUAVUint" on a float-format UAV. This leads to D3D debug layer validation errors due to mismatched RWBuffer type binding. The caller is also responsible for handling appropriate resource transitions. The underlying resource must be writable when calling "RHIClearUAVFloat" or "RHIClearUAVUint." Ray Tracing Crash Fix Fixed potential crash when ray tracing Instanced Static Mesh with invalid render data. Crash Fix: Fixed a driver crash when scrubbing in the Sequencer with ray tracing enabled with geometry cache. The index buffer was modified by the render thread while the RHI thread was reading it for building the ray tracing acceleration structure. Crash Fix: Fixed a crash when creating ray tracing geometry if any section has a null vertex buffer. Crash Fix: Fixed a crash when isolating materials in the Static Mesh editor with ray tracing enabled. Crash Fix: Fixed a crash when in Path Tracing or Ray Tracing Debug view modes when "r.raytracing.forceraytracingeffects" is set to 0. Bug Fix Clear Coat custom data channels are correctly populated in the ray tracing payload. Bug Fix: Fixed light's "Affect Global Illumination" flag being ignored by ray traced global illumination. Bug Fix: Ray Tracing Reflections bounces are clamped to a minimum of 1 to avoid artifacts if there were no bounces. Bug Fix: Changed geometry tolerance of primary rays in translucency pass to avoid artifacts due to transparent objects contacting opaque objects. Bug Fix: Applied a small fix in ray tracing reflections where "AccumulateResults" early return was missing. Bug Fix: Fixed incorrect usage of Indirect Irradiance in packed ray tracing payload. Renamed members of FPackedMaterialClosestHitPayload to avoid accidental use of packed members instead of using accessors. Bug Fix: Fixed missing GPUSkinCache barrier transitions when ray tracing is disabled. Basically we would queue up back-to-readable transitions after updating the cache, but only flush this list if ray tracing was enabled. This change reworks the logic to make skin cache barrier flushing independent of ray tracing. Bug Fix: Fixed reflection captures were not working properly in ray tracing because material bindings were incorrect due incorrect views handling in WaitForRayTracingScene. Fixed ray tracing translucency alpha channel in Composure. Bug Fix: Fixed resource state tracking and transitions for ray tracing resources. Bug Fix: Fixed ray tracing was not taking into account layer visibility to add and exclude objects from the ray tracing world. This was affecting Composure layers with ray tracing. Bug Fix: Forbid ray tracing shaders in the landscape thumbnail render. Bug Fix: Fixed an uninitialized out parameter causing undefined behavior in the spot light estimator. Fix other similar instances through path and ray tracing shaders to prevent similar issues in the future. Bug Fix: Fixed Sky Light Color not being set properly in ray tracing global illumination. Bug Fix: Fixed Static mobility in Sky Lights not generating required information when ray tracing is enabled. Bug Fix: FGenerateReflectionRaysCS is not compiled when ray tracing is disabled. Bug Fix: Fixed incorrect throughput calculation path tracing specular reflection material sampling. Bug Fix: Initialized the path tracer's previous material payload to fix validation errors. Bug Fix: Fixed old path tracing regression where the payload was expected to carry previous material hit information upon a miss (for MIS with next-event estimation). Bug Fix: Fix Path Tracer specular transmission material over and under contribution due to incorrect throughputs and pdfs. Bug Fix: Fix checks for setting Sky Light related parameters to rely on what they require rather than if the sky light should be ray traced to allow proper functionality in path tracing and other passes. functions. New: Implemented batched ray tracing material bindings. Materials are now bound to the ray tracing scene using async tasks, which saves approximately 1.6 milliseconds of render thread critical path in the "Infiltrator" scene. New: Packed ray tracing payload is now used during the main ray traced reflection rendering. Same payload instance is used for main material ray trace and for shadow rays during direct lighting. This significantly improves performance for the regular code path when not using miss shaders to evaluate lighting. New: Implemented lighting calculation using a miss shader that's invoked during shadow ray tracing in reflections and translucency. This removes a significant amount of code from ray generation shader and results in up to 2x speedup for reflection rendering in many scenes. The miss shader code path is enabled by default and can be controlled with "r.RayTracing.LightingMissShader." Also, moved some common lighting resource initialization from per-effect to per-view. New: Added parallel GatherRayTracingWorldInstances which makes critical path time of this function approximately 2x faster in a typical large scene on a multi-core CPU. New: Added support for GPU Niagara Meshes in ray tracing. Instances are copied directly from the Niagara GPU float buffer into the ray tracing instances descriptor GPU buffer through a compute shader. New: Added support for disabled sections for dynamic meshes in ray tracing New: Added "bEvaluateWorldPositionOffset" for ray tracing changed from BlueprintReadOnly to BlueprintReadWrite. New: Ray tracing reflections now evaluates the bottom layer before the top one. This is because the top layer data needs to stay live until the bounce is computed, so allowing the bottom layer to execute first reduces overall live state. New: Added option to render opaque objects only in ray tracing debug view modes (r.RayTracing.DebugVisualizationMode.OpaqueOnly - Enabled by default). New: Implemented "SV_InstanceIndex" emulation for ray tracing shaders. New: Added an option to allow for a ray traced Sky Light contribution in reflections. The existing Sky Light functionality has been refactored to allow for better code reuse when using ray traced sky lighting sampling. New: Texture LOD for ray tracing shaders is now a per-project setting, disabled by default. New: Implemented support for binding different vertex buffers per ray tracing geometry segment. Removed the skin cache vertex buffer merging step, improving GPU performance. Deprecated RHICopyBufferRegion/s, as it was exclusively required for skin cache VB merge. Unified bottom level acceleration structure build and update APIs. New: Exposed "Force Opaque" in Ray Tracing Flag for Static Mesh Assets. New: Added base instance index to ray tracing hit group system parameters. This can be used to emulate SV_InstanceID in hit shaders. New: We now allow unique custom user data to be provided per instance when native ray tracing instancing is used. Previously only transforms could be unique. New: Added support for binding multiple ray tracing miss shaders with custom local resources. New: Ray Tracing shader bindings array size increases to match D3D12 RHI limits. New: Deduplicated flag generation in ray tracing reflections and move ray tracing reflection related consol variables and additional flag generation into the proper file. New: Added options to turn off compilation of ray tracing material closest or any-hit shaders. This may be useful for titles that use custom shaders or don't require full material evaluation (For example, only using ray tracing for shadows or AO). This adds new console variables "r.RayTracing.CompileMaterialCHS" (default 1) and "r.RayTracing.CompileMaterialAHS" (default 1) The value is read-only and must match between cook-time and run-time. New: Added flags to improve ray tracing performance on visibility rays not requiring hit depth. New: Implemented simplified ray tracing material hit shaders which forces everything to diffuse and no static lighting evaluation. New: Improve generic TraceRay helper functions and replace existing common TraceRay calls which fit established material and visibility ray patterns. New: Ray tracing reflection rays are not traced from pixels that have the "Unlit" shader model. New: Added better messaging when ray tracing reaches the maximum number of sections supported per mesh. New: Enabled multi-bounce refraction from interface tracking and model total internal reflection in ray tracing translucency. New: Shadow rays in RTGI are shortened by default to avoid hitting the sky sphere making it possible to get desired sky reflections without affecting GI. New: Added an option to disable hair ray tracing when the engine is in ray tracing mode. New: Improve path tracing sky light sampling to use normal-based SH irradiance rather than a highest-mip cubemap guess. New: Implemented basic tiled mGPU support for the path tracer. New: Implemented wiper mode to compare the current image output with the path tracer. It can be enabled with the console variable "r.pathtracing.wipermode." Path Tracing view mode must be active. New: Added DefaultLit material model to path tracer New: Added ClearCoat material model to path tracer. Improvement: Improved multi-sample ray tracing reflections. Multi samples per pixel is now handled from a high level loop. The reflections shader adds an extra stage that reads the current accumulated color/hit distance on anything beyond pass 0, and only applies the final weighting on the final pass. FX - Cascade and Niagara Crash Fix: Fixed crashes that were occurring when a user performed an Undo/Redo in the system and emitter editor. Crash Fix You can now access the Niagara shader compilation manager through a static Get() function instead of through a globally initialized variable. This stops the compilation manager from crashing in monolithic builds, where it attempts to access FCommandLine during global construction where FCommandLine has not been initialized. Crash Fix: Fixed a crash that occurred when the user duplicated an emitter through the right-click menu. Crash Fix: Fixed crash that occurred when shutting the engine down if Niagara shader compiles were still in process. Crash Fix: Fixed a difficult to reproduce shut down crash. Crash Fix: Fixed a problem with random crashes that occurred when closing the Niagara system and emitter editor. Crash Fix: Fixed crashes that occurred when the user loaded broken data. Crash Fix: Fixed a crash that occurred when adding an Event Handler. This was caused by system instances being active, and the event handler modifying the cached data. Crash Fix: NiagaraDataInterfaceVectorField was crashing when trying to bind a vector field texture with a null render thread representation (likely a timing or init order problem). This is now fixed. Crash Fix: Fixed a bug that could lead to a crash when a variable default value was not initialized. Crash Fix: Fixed a bug where changing the type of a static switch could result in a crash. Crash Fix: Fixes system/emitter editor crashes when creating a circular reference in the Niagara Graph. Bugfix: Fixed distortion of velocity-aligned particle sprites at low velocities. Bugfix: Fixed issue where a Particle System Component would not tick with its owner's time dilation when using the Particle System Component Manager. Bug Fix: Added support for uint32 vertex indices when using Niagara ribbons. Fixed a bug occurring with Niagara ribbons when requiring more than 16-bit indices. Bug Fix: Fixed issues that were causing compile messages to show up inconsistently in the system and emitter editors. Bug Fix: Fixed an issue with GPU fences so that Clear() will result in Polls failing until the fence has actually been hit. Bug Fix: New user parameters now correctly enforce having a unique name. Fixed a bug occurring in the use of Undo/Redo that caused the preview system to stop running. Bug Fix: StaticMesh interface is now more stable when no mesh is provided, or when CPU access is not enabled on the mesh provided. Bug Fix: Fixed a bug that could prevent the SimulationTarget static switch to work for GPU emitters. Bug Fix: Reset system simulation when setting fixed bounds, so the Preview panel shows the new bounds even when paused. Bug Fix: Cleaned up the registering of Niagara emitter events to resolve multithreaded access. Bug Fix: Made Niagara material loading correctly pass the "loaded from cooked material" parameter around, so that we can load cooked data. Bug Fix: Removed the "Simulating" overlay from the Niagara System Overview Graph and the Script Editor Graphs since modifying Niagara graphs in real time is supported. Bug Fix: Fixed an issue with the CPU async collision queries within Niagara. Bug Fix: Updated the compile error for disconnected numeric pins in module scripts, so that the error tells the user how to fix the issue and provides links which navigate to the issue. Bug Fix: Allocated space in Niagara ParameterStore for missing parameters, to fix potential memory overwrite. Bug Fix: Fixed a problem where Niagara curves were not rendering in the Curve Editor if the curves are linear (all keys have the same value). Bug Fix: Matrices now default to the identity matrix by default when they are added. Bug Fix: Updated force and velocity solvers to ensure that Drag stays entirely framerate independent, even at extremely high values. Bug Fix: The Scale Color and Scale Color by Speed modules now properly stack with each other and with duplicates in the stack. Bug Fix: The Curl Noise Force module is now correctly deterministic when it is used in "Pan Curl Noise" mode. Bug Fix: Fixed simulation inconsistency issues when enabling and disabling emitters. Bug Fix: Fixed ownership of FNiagaraParameterStore when dealing with hierarchies of Blueprint. Bug Fix: Fixed major performance problems in the system editor when editing systems with a large number of emitters (8 or more). Bug Fix: Fixed an issue where dependencies were not refreshing correctly in emitter scripts. Bug Fix: Added the "Library Only" option to the module and dynamic input reassignment menus, which are used when assigning new scripts to modules, and also for dynamic inputs with missing scripts. Bug Fix: Fixed an issue where redundant Niagara GPUComputeShaders were being saved into cooked builds, resulting in non-deterministic cooked builds. Bug Fix: Data Interface functions which need per-instance data can now be called from custom HLSL on CPU emitters. Bug Fix: It is no longer possible to rename emitters to reserved names: [Emitter, Engine, User, Local, Module, NPC, Particles, System, Transient]. Doing so could cause a compiler failure in the system. Bug Fix: You can now undo Niagara module deletions after using the Trashcan button. Bug Fix: Fixed a barrier error in NiagaraEmitterInstanceBatcher::SortGPUParticles. Need to reissue a ComputeToCompute RW barrier after each simulation dispatch (at least until UAV overlap is implemented instead). Bug Fix: Removed premature error logging from a step in the enum loading process. Real errors are handled after both methods are tried. Bug Fix: When you focus the selected emitter in the Niagara Preview viewport, it no longer disables the orbit camera. Bug Fix: When you delete an isolated emitter, it now clears the isolated status of the system itself. Bug Fix: Fixed bool access to NiagaraParameterCollection. Bug Fix: Fixed a "No mesh assigned error" being erroneously reported when using the Sample Skeletal Mesh Skeleton module. Bug Fix: Fixed an issue around destroying a Niagara Component on the same frame as a tick group promotion has been requested. You can reproduce this issue if you have the instance change tick group during system sim ticking, to a tick group before the current one, and then removing the instance before post-actor tick (such as DestroyComponent being called on the owner). Bug Fix: Fixed a deadlock that occurred if TickDataInterfaces destroys the system instance; it will still flag for async work and finalize even though it is complete. Bug Fix: Mark Niagara Component parameter stored UObjects as dirty after reachability analysis; this fixes an issue where instances can reference stale UObjects. We may be able to refactor this later, but for the moment this fixes a pretty nasty GC related issue. Bug Fix: Fixed a race condition where unbinding parameters could be modified while the instances are still ticking. Bug Fix: Cleaned up ExecIndex and implemented Engine.ExecutionCount for GPU systems. Bug Fix: Fixed an issue with races in Volume / Texture data interfaces; it was reading from data set by the RT when determining if the texture is valid. Bug Fix: Fixed an issue with having a RenderTarget still bound that is going to be used as an SRV during Niagara's compute pass. Bug Fix: Fixed an issue that occurred if a GPU emitter was disabled on the first frame and never ticked. Bug Fix: Fixed mesh renderer stats so we do not increment each section. Bug Fix: Fixed a bug where renaming a parameter lost the default value settings for that parameter. Bug Fix: Static switches are now prevented from receiving drag and drop inputs, because it leaves them in a permanently broken state. Bug Fix: Fixing memory problems with the shader compiler jobs being either leaked or prematurely deleted by using TShadredRefs instead of raw pointers. Bug Fix: Fixed a bug with integer type static switches that had exactly two input values. Bug Fix: Fixed a bug where particle attribute reads could be referenced from the wrong namespace. Bug Fix: Fixed a bug where an actor with a Niagara components on autodestroy could be destroyed before the system was finished. Bug Fix: Fixed a bug where modifying a static switch node could cause the loss of all its parameter metadata. Bug Fix: Fixed the matrix access in the spline data interface. Bug Fix: Fixed a bug where the Static Mesh DI did not return the correct transform for a mesh. Bug Fix: Fixed concurrency problems of the export data DI by moving the callback to a separate task graph call. Bug Fix Making it impossible to make attributes as numeric. Numeric is a generic conversion type, not a physical storage type. Bug Fix Fixing issue where particle vector random instance parameters were incorrectly using VRand, causing unit vector sizing instead of a box for certain assets. A new unit vector sizing option was added to allow data created under the incorrect implementation to have the same behavior. Bug Fix No longer using FNiagaraVariable names in UNiagaraScriptVariable names as periods can lead to bad behavior. Bug Fix Fixes for Niagara vector field resources being accessed on the gamethread without it being necessarily initialized. Bug Fix Fixed bug where killing particles on spawn wasn't being honored. Bug Fix Making texture sampling DataInterface emit an error if used in CPU sims. Bug Fix Fixed if multiple instances of a DI were used in a script, the HLSL translator produced all function combinations for each instance. Bug Fix Fixed world time in level editor viewport. Niagara and Cascade culling behavior that works via the component's LastRenderTime will now function correctly in the Level Editor Viewport. Bug Fix Fixed several issues with Particles.UniqueID. Bug Fix UEdGraphPin BreakAllPinLinks and MakeLinkTo don't by default notify the owning node that changes have happened. In some cases, this means that we don't detect that the graph has changed out from underneath us and so we don't recompile. New: Added missing debug names to some Niagara buffers (makes barrier tracking and debugging easier). New: Disabled interpolation on dynamic parameters for sprites/mesh Cascade vertex factories because they are constant and do not require interpolation. New: Moved GPU and CPU script compilation to the shadercompile worker process. New: Niagara mesh particle rendering: Added locked axis and camera offset to Niagara mesh renderers. Fixed issues with the roll of velocity-facing and camera-facing mesh particles. Niagara mesh particle rendering fixes may cause slight differences in existing velocity-facing or camera-facing Niagara mesh particle emitters. New: The "Fix issue" and "Dismiss issue" buttons in the System Editor Selection panel now wrap to the following line when the error message is longer than the width of the panel. Cut, Delete, Rename, and so on). New: Replacing a deprecated module or dynamic input with the suggested replacement now also renames the module, and tries to retain any dynamic inputs or input values that were set on that module previously. New: Niagara View Options combo button in the Selection panel is now highlighted with an orange background if any options are set to non-defaults. New: Added an Experimental Message field to modules, to enable module creators to enter a reason for why a Niagara script was marked as experimental. New: Added a flag to enable/disable motion blur per Niagara Renderer. Fixed local space motion blur for mesh/sprite particles. Disabled velocities on ribbons, because they were not accurate and making them accurate is a difficult problem due to tessellation. New: Removed deferred deletion from Data Interfaces, as it is not required. Removed TSharedPtr from DI proxy as it is no longer required. New: Added an ability to mark Niagara modules, dynamic input classes and functions as Experimental, which will display an Info icon next to their name in the Selection panel, System Overview node and Parameters panel. New: Added a "Create Asset from This" context menu option to emitters; this will duplicate the selected emitter in a Niagara system and re-parent the emitter to the new asset. New: Emitters now have an Edit section in their context menus. It is now possible to rename an emitter by pressing F2. New: Added a console variable "fx.MaxNiagaraGPUParticlesSpawnPerFrame" to control per frame spawn capacity on the GPU. New: Various fixes to matrix related functionality, some were transposed and some were not: You can now generate a matrix inside Blueprints, pass in the vectors to a Niagara system and have it work as expected on both CPU and GPU. Added MatrixToQuaternion. This uses the fast path library under the hood, but gives us a location to replace once we address matrix access issues inside the VM. New: Added a Deprecation Message to Niagara scripts, to enable module creators to communicate why a module is deprecated and what users can do about it. New: Cycling through issues with the issue button in the Niagara Selection panel now cycles through the specific emitter when multi-selecting emitters. New: Added support for SubUVs on the Niagara Mesh Renderer. Added nointerpolation to various attributes that do not require interpolation. New: Added "Create Duplicate Parent" for Niagara Emitters, which duplicates the selected emitter and reparents it to the newly-created emitter. This allows you to insert parents into the inheritance chain of emitters. New: Moved the Isolate toggle to the Render section in the Niagara emitter node. Added a "Go to Parent" button to emitters that have parents. New: Various memory savings for NiagaraScript - Saves ~33% in a simple test level (713.61kb -> 478.70kb). New: Disabled sorting in Niagara by default on sprite and mesh renderers. The existing system will use the previous default to avoid undesirable behaviour. New: Modules can now be renamed from the Niagara Selection panel. The module will still display its original name in parentheses when renamed. This does not affect compilation of the emitter/system scripts in any way. New: Niagara dynamic inputs now display a tooltip in the Selection panel. New: Added a button to isolate a selected Niagara emitter to the System Overview panel. Emitters that are not shown when the system is isolated are greyed out in the System Overview panel. New: Added a context menu to Niagara emitter nodes in the System Overview panel. New: Added an option to filter the Selection panel to only display modules that have issues. Clicking the issue icon in the Selection panel header now cycles through all the issues in the view. New: Added VertexFactory array per View for Cascade particle systems. This fixes issues for split-screen, where particles are oriented to View 0's camera for example. New: Niagara System Overview nodes now have their issue icon right-aligned. Clicking the icon cycles through modules in the selected emitter that contain issues. New: We now use FriendlyName with Niagara Shader Compile Jobs. New: Niagara now tracks GPU particle memory. New: Niagara now calculates the max number of instances required across all sim passes. This reduces memory pressure when Sequencer is scrubbing the timeline and is forcing many simulation ticks at once, because now it will not progressively increase buffers but instead will do it once up front. New: Niagara's "CPU - Ray Traced" and "GPU - Scene Depth" Collisions' stability has been increased by eliminating surface interpenetration correction using particle teleportation. The Pure Roll component has also been improved to handle arbitrary surface angles. New: Niagara can now optionally show a "real time disabled" warning. New: Niagara parameter Map Get and Map Set nodes now get a drop target highlight when they are hovered over. New: Niagara "Set Variables" modules now display the name of the variable that they are setting, and the number of additional variables changed is in parentheses. New: Added icons to the Emitter Properties row to indicate CPU or GPU sim target. New: Now FNiagaraUpdateContext can split its Destroy and Reinit phase to help in cases such as PreEditChange/PostEditChange where systems are torn down before the change, and reinitialized after the change. New: Added an occlusion query data interface that samples the depth buffer in a circular or rectangular pattern to estimate how much of the sample area is occluded. This can be used to find out how much of a sprite or mesh is visible on screen and either optimize performance or create effects such as lens flares. Since this data interface is using the depth buffer, it only works for GPU particles and not for CPU particles. New: Added a Camera Query data interface that can be used to get common camera properties, such as position, for both CPU and GPU particles. On GPU, it can be used to get advanced properties such as the view transforms. There are some limitations to the camera data interface: When used with splitscreen or stereo rendering, the data interface returns only the info for the first available view to the particle simulation. When combined with a data interface that requires a depth buffer or distance field (such as for collision) then the camera information in the particle simulation will be a frame latent. New: Particle emitter memory allocation is now based on a runtime estimation of the max particle count. Added an option to display current particle count estimation in the editor viewport. New: Optimization of bool from {float,int} comparison dynamic inputs. Added "Less Than" operations for ease of use. New: Update to Niagara SetBoolByFloatComparison module to include "Less Than/Less Than or Equal" choices for ease of use. New: Static switch parameters in the stack view can now be reset to their default value. New: Added an LOD highlight category to give a visual indication that these two nodes work together. New: Swapped Niagara SetBoolByFloatComparison to use Static Switches. New: Fixed derivative calc in PolarToCartestiaCoordinates. New: Added SetBoolByIntComparison dynamic input. New: Modules can now define custom upgrade paths when deprecating old modules. See the Emitter State module for an example. New: Added an Audio Spectrum Niagara Data Interface for creating live audio particle effects. New: Because it is impossible to set a reference to a world within the system editor, all the properties referencing AActors or UActorComponent derived classes are now disabled in the system editor. New: Fixed custom HLSL node to support DataInterfaces. Once you name the Data Interface, you can invoke any supported Data Interface method with a period (.), followed by the function invocation. Niagara detects this behind the scenes and sets up all the necessary connections. Also, now users can rename a pin when creating it. New: Logging for the SetEmitterEnable parameter is not implemented for Niagara components. New: It is now a compile error to have multiple different event reads. New: Added two new pairs of UNiagaraNodeOp nodes for generating integer and floating point random numbers. This is because integer and floating point random numbers are semantically different due to how they handle the upper range of an interval. The old "Random" and "Seeded Random" were inconsistent about this across CPU, GPU and determinism modes. Floating point random number ranges are typically half-open, meaning they do not include the upper limit. Integer random number ranges typically do include the upper limit. This adds the following ops that are consistent with the equivalent Blueprints functions: Random Integer: Non-deterministic, produces numbers between 0 and Max-1. Random Float: Non-deterministic, produces numbers between 0 and Max, but not including Max. Seeded Random Integer: Deterministic, produces numbers between 0 and Max-1. Seeded Random Float: Deterministic, produces numbers between 0 and Max, but not including Max. In addition, it includes two new helper functions, also consistent with the equivalent Blueprints functions: Random Range Float: produces numbers between Min and Max, but not including Max. Random Range Integer: produces numbers between Min and Max. The old Random Range function and the old Random and Seeded Random ops are not touched. New: Adds a new default initialization mode to Niagara variables where you can bind to available compatible parameters using a dropdown menu in the Selected Details panel inside the Script Editor. By selecting a variable in the Parameters panel there is now a new DefaultMode widget that can be "Value", "Binding" or "Custom". When "Value" is selected, it will display an appropriate widget below it to set the default value. A value can only be defined if that variable is referenced in the graph. When "Binding" is selected, it will display a dropdown list of available names to bind. Only built-in names and names explicitly added to the graph or Parameters list will show up. When "Custom'' is selected, it will display neither of the widgets, and indicates that the initialization is done using a sub-graph. This will be streamlined later when the default pins are removed in favor of a separate initialization node. The bindings should be functionally equivalent to the old version of setting the default with a sub-graph and the Begin Defaults node. New: Niagara Parameter Map pins now look like BP exec pins. We will be expanding upon this in a future release where flow control support is added. New: Added support for Materials being set in BP for Sprites and Ribbons that use UMaterial user variables. New: Added the ability for renderers to provide warnings/cues to users. New: Now a warning is displayed when fixed bounds are not set for GPU simulations. New: Niagara script Graph node contents are now being hashed to more reliably identify DDC candidate data. Created Engine\Plugins\FX\Niagara\Shaders\Private\NiagaraShaderVersion.ush that currently works in tandem with FNiagaraCustomVersion::LatestScriptCompileVersion, but is needed for the desired in-editor workflow of making an edit to source files and having it update if a proper console command is set. Added console commands fx.InvalidateCachedScripts and fx.RebuildDirtyScripts, similar to r.InvalidateCachedShaders and recompileshaders changed. New: Info, warning and error messages in the stack view now use different colors. New: Updates Niagara modules, dynamic inputs and functions to use the new explicit RandomRangeFloat and RandomRangeInteger functions from CL # instead of the old RandomRange helper. New: Added an optimization to the VectorVM pattern of acquiring indices and outputting a stream of attribute data. New: Added the Collision Channel enum as a parameter type. New: CPU and GPU Emitters are now differentiated by icon in the System Overview mode. New: Optimized the calculation of the bounds of Niagara CPU particles; an emitter of 100 sprites now has a cost reduction of 40%. New: Fixed up support for GPU mesh emitters that use a mesh with multiple sections. New: Enabled per-particle material parameters for materials used with Niagara systems when targeting GLSL ES3.1. New: Added a set of constant buffers to hold engine-driven Niagara variables, so that we can reduce the cost of managing the parameters through the script execution pipeline. New: Added support for PrimitiveComponent's BoundsScale to NiagaraComponent. New: Location modules now can be optionally masked by SpawnGroup (as defined in Spawn modules such as Spawn Rate or Spawn Burst Instantaneous). This allows a simple form of grouping, and it means that different buckets of particles that are all in one emitter can be easily placed and controlled. New: When script compile errors occur in the system or emitter editor, it now shows the errors in the System Overview and Selection panel. These errors are shown on the module or dynamic input where they occurred, when that information is available. Navigation links are also displayed when that information is available. New: Skeletal Mesh sampling has been rewritten completely to be more full-featured and performant. The old sampling/apply modules have been hidden (but not deprecated) and replaced with Skeletal Mesh Location. The new module is more consistent, easier to use, has more options for which attributes are sampled and written, and is much more lightweight from a performance perspective. New: The Niagara Collision GPU/CPU enum has now been removed from the collision module, because now we can set the value automatically. New: Uniform Ranged Linear Color has been enhanced to give more control over how the color channels are randomly chosen. This enables artists to connect the RGB channels together with a single random value (for example, a random value between black and white would give shades of grey) or to disconnect the channels to drive each channel with its own random value (for example, random values between black and white would give a whole rainbow of colors). New: Changed the default usage bitmask for modules and function scripts to include the particle simulation stage flag. New: Added comments to the Curl Noise Force module, to show how memory usage and performance changes based on the quality of the underlying vector field or function evaluation. New: Spawn modules (Spawn Rate, Spawn Burst Instantaneous) now have an optional spawn probability, which is the chance that the module will generate particles at all during that frame. This allows for more erratic random behavior in spawning without the need for complex logic chains in the Spawn Count inputs. New: Scale Sprite Size and Scale Sprite Size by Speed modules now accumulate a transient scale factor. This enables multiple scale modules to properly accumulate scale offsets, and also enables them to function with sprite size scaling at the LOD level. New: Niagara Graph nodes now retain invalid (no longer exposed) pins which have non-default values or links to other nodes. These invalid pins turn red, and will generate compile warnings which are visible in the emitter and system editors. The warnings also provide navigation links to find and fix the invalid pins. New: The Apply Initial Forces module can now apply forces placed in a Spawn module as if the time was less than 0. This enables users to break up initial spawn positions using forces such as curl noise, wind, or random vector offsets. New:. New: Niagara ribbons now preserve multi-ribbon ordering to prevent random flickering when the camera moves around. Disabled multi-ribbon ordering when using opaque materials. Fixed random flickering when using multi-ribbon in Niagara. New: Added FGPUSortManager that handles different GPU sort tasks. The current clients for it are Cascade and Niagara. Deprecated: Set volumetric scattering default for light renderer to 0 (same as Cascade). Deprecated: Removed fast path implementation from Niagara. This feature was only visible by setting a cvar, so this deprecation should not affect any users. Lighting Bug Fix: Applied a fix for rendering both Capsule Indirect Shadows and SSAO in the Foward renderer. Previously the SSAO would overwrite the results of the capsule shadows. Bug Fix: Fixed Virtual Texture encoding of BC4 textures which fixes lightmap AO material mask when using VT lightmaps. Bug Fix: Fixed the specular highlight in lighting-only mode that was accidentally introduced in the last release. Improvement: Updated GTAO to improve spatial and temporal filters to bleed better across discontinuities. Added a Thickness Heuristic to bias AO around smaller objects that can be controlled using "r.GTAO.ThicknessBlend." Also added console variables to control the falloff so it is not hardwired which can be controlled using "r.GTAOFalloffEnd" and "r.GTAO.FalloffRatio." Materials Crash Fix: Removed various UMaterialExpression::NeedsLoadForClient impl, and instead it always returns true for UMaterialExpression::IsEditorOnly, as we no longer want any expressions in a non-editor build. MaterialCachedData now checks for nullptr expressions and avoids updating if any are encountered. This fixes a crash occurring with the editor loading cooked/non-editor data. Bug Fix: Fixed an issue in shader complexity that prevented materials using depth pixel offset from rendering. Bug Fix: Fixed an issue with SSS specular when checkerboard is off. Bug Fix: Fixed an issue with the scene color node in translucent shaders. Bug Fix: Scalar type now correctly propagates through a vertex interpolator material node. Bug Fix: Fixed a race condition that resulted in duplicate shadermaps for material instances. Bug Fix: Made "GDefaultMaterialParameterCollectionInstances" a TMultiMap instead of TMap. This makes it possible for the same MPC to be loaded multiple times. If this happens, it will get added to this map multiple times. If it's not a multimap, the first instance that's destroyed will remove the ID from the map, which will cause further lookups against the map to fail. Bug Fix: Update the runtime Virtual Texture contents after any relevant Material Instance parameter change. Bug Fix: Applied a fix for Deferred Decals after base pass not being rendered when GTAO is enabled. This was due to them not being registered in the render graph as a dependency when rendering the GTAO. Bug Fix: Updated "CopyMaterialUniformParametersInternal" to work without requiring a FMaterialResource. Bug Fix: World Position Offset can be affected by not only regularly connected nodes (Current WPO), but also using Previous Frame Switch nodes (Prev WPO). Take that into account when setting compilation output flags. Bug Fix Prevent forcing low or medium quality usage for platforms that cannot lower the shader quality. This reduces the number of shaders to compile. Bug Fix Fixed unnecessary recompilation of the material when simply clicking on its textual and numeric parameters. Bug Fix Fixed the "Failed to find material attribute, PropertyType: 28" warnings happening during the cooking process. New: Added Burley sample override for offline rendering. New: Allowing modulate translucency to render into the post Depth of Field pass with Dual Blending. New: Allow additional defines and include macros for custom nodes. New: Added a framework to allow user to override the streamed mips of UTexture2D Assets. This allows custom implementations to fill in mip data (from other sources than the cooked mips). Users must derive a class from UTextureMipDataProviderFactory and add it to the UTexture's "AssetUserData." This one has to allocate a custom implementation of FTextureMipDataProvider to fill the streamed mips with its own strategy. New: Default to World Space Normals when writing to Runtime Virtual Texture through standard material attributes. New: Added optional World Position Offset input to Runtime Virtual Texture Sample node. This allows us to manipulate sample coordinates. One use case is to sample using the world position from before material World Position Offset is applied. New: Added Blueprint API for setting layer parameters on Material Instance Dynamics. New: Added texture group TEXTUREGROUP_Project11 to TEXTUREGROUP_Project15. New: Added support for moving Translucency Pass to before water rendering depending on if the camera is above or below the water surface. This introduces a water depth state on the scene view making it now possible to, for example, call IsUnderwater() on a view. New: Write output hash as part of the shader debug info. New: Optimize getting the material parameters for the UI speeding up editing "master" materials with a large number of parameters. New: Sharing material shader code is now the default. New: Added support for the shader pipelines in the shader code library's stable map. Improvement: Minor time savings during shader compilation by not including the preprocess defines. Mobile Rendering Crash Fix: Fixed an assert in PreparePrimitiveUniformBuffer with Skeletal Meshes when ES3.1 Preview mode is enabled. Crash Fix: Fixed a crash related to Reflection Captures when running DirectX mobile emulation. Bug Fix: Fixed HQreflection on mobile. The first empty spot in the ReflectionCubeMapTextures (if there is any) will be taken by the Sky. Bug Fix: Updated Graphics Resources for Indirect Commands; this fixes Niagara for OpenGL. Bug Fix: Fixed roughness clamp causing IBL mip0 never to be used. Bug Fix: Try to update all PassUniformBuffers before start rendering to avoid GPU flush. Bug Fix: Fixed a precision issue at mobile base pass pixel shader. Bug Fix: Fixed an issue with Spot Lights not working in the mobile renderer when not enabling cast shadow. Bug Fix: Fixed a problem that movable objects never get reflection captures updated in Forward renderer. Bug Fix: Fixed a Software Occlusion issue that Landscape does not generate correct VB and IB data for multi-sections components. Bug Fix: Added an option to promote 16bit index buffers to 32bit on load, which is needed for some Android devices (Mali T8X series). Activate it by setting "r.Android.MaliT8Bug" to 1. Bug Fix: Fixed a mismatched depth/stencil buffer access type when adding a scenecapture2d in the scene on mobile preview. Bug Fix: Niagara GpuSim fixed for Android. Disabled overlap compute on Android. Workaround for Mali Compiler bug that assumes a texture index is warp invariant. Bug Fix: Moved mobileDepthPrepass Uniform buffer update to the start of the frame. Bug Fix: Fixed an issue where the global shader is not recompiled after turning MobileHDR on or off. Bug Fix: Fixed an issue with "Render in main pass" component option now works correctly on mobile. Crash Fix: Resolved occasional crashes on iOS devices while using scene captures. Crash Fix: Fixed a crash in Post Process Visualize Complexity when in ES3.1 preview mode. Bug Fix: Disabled Vulkan support for Android devices with Mali-G72 GPUs that use Android 8 or older version. Those devices can't create a PSO with a compute shader that uses texel buffers. Bug Fix: A8 textures are not writable in Metal. This change switches the backing to R8 and leverages Texture2DSample_A8 so we sample the correct channel. Bug Fix: Fixed a small issue that local light's diffuse color was not correctly cosine-weighted. New: The texture buffer is greater than the 2D texture for GPU Scene Mobile devices. This was done because of the MALI GPU limitation of 64kb texture buffer. The default precision of compute shaders to high precision was done because of the MALI GPU limitation of 64kb texture buffer. The GPUSceneUseTexture2D was changed requiring a recompile of all shaders. New: Android projects will now use SensorLandscape as orientation and Immersive full-screen options by default. Previous engine versions were using Landscape and non-Immersive as defaults. If you would still prefer that setting, you can change it back from the project settings. New: Added support for light shafts with MSAA on iOS. Makes all post process passes, including post process materials, work properly with MSAA on iOS. New: Niagara GPUSim iOS/Android glGenVertexArrays, glBindVertexArray, glMapBufferRange, glCopyBufferSubData, glDrawArraysIndirect, glDrawElementsIndirect added One VAO bound per context now (different from Default VAO - indirect ES requirement). New: Added an option on Materials for Forward shading (including mobile) to use EnvBRDF instead of EnvBRDFApprox to have a better IBL effect. New: ImageBasedReflectionLighting now keeps coherence between mobile and PC. New: LightMapPolicyType now keeps selection coherence between mobile and PC. New: Added particle related material properties for Position, Time, Direction so that they can now be used on Mobile. New: Emulated uniform buffers will now be used by default on OpenGL ES3.1. These buffers significantly reduce shader memory usage and often improve rendering performance slightly. You can revert to old behavior by adding "OpenGL.UseEmulatedUBs=0" into project DefaultEngine.ini file. New: Corrected the Stationary Sky Light contribution for two sided primitives. New: Fixed an issue of incorrect indirect lighting on movable objects because of the mismatch of SHCoefficients. New: Added support for Post Process Material stencil test on mobile platform based on the new mobile custom depth implementation. New: Changed mobile custom stencil format from B8G8R8A8 to G8. This saves depth to the R16F color target and uses less memory for the depth target to reduce memory cost on mobile platforms. Added a console variable for using half-res custom depth. New: Hardware occlusion queries will now be enabled by default on Mobile platforms. Occlusion queries can be disabled on a specific mobile device by adding "r.AllowOcclusionQueries=0" to a corresponding platform's device profile. New: Refactored GpuScene & UniformBuffer to unify mobile and console code paths. New: Added ClearCoat shading model for mobile renderer. New: Refactored the MobileBasepassPixelShader to make further shading model extension to be more easy and maintainable. New: Saved some bandwidth for mobile post-processing by splitting RT from R16FG16FB16FA16F into R11G11B10 and R16F. New: Implemented Eye Adaptation for mobile. New: Added support for Eye Adaptation Lookup from material graph for mobile. New: Added Android Desktop Forward Rendering (Experimental). New: Fixed RandomSeed difference seen on different devices Function argument order evaluation is undefined. New: Added 3x3 PCF shadows for mobile and support for texGather on ES 3.1 New: Added experimental support for Virtual Textures on mobile platforms. It requires IPhone8 or newer iOS devices, and preferably Vulkan on Android devices. Runtime Virtual Texture compression is not implemented yet. VT on mobile is disabled by default and should be enabled in the project settings. Improvements: Improvements to Mobile Custom Depth: Added a console variable "r.Mobile.CustomDepthDownSample" which if set to 1, will use a half-resolution custom depth texture. Changed stencil format to G8, and depth format to R16F, both memoryless color targets to reduce memory cost on mobile platforms. Deprecated: Removed ES2 support. See blog post on for more information. Deprecated: Removed ATC/ETC1 and Android PVR texture formats. Optimizations New: Replaced flipping "gl_Position.y" coordinate in HLSLcc by flipping viewports in Vulkan RHI (uses VK_KHR_maintenance1 extension). Primarily used to simplify the consumption of SPIR-V modules generated by DXC. New: Added support for UAV overlap on Intel and renamed the console variable "r.D3D11.NVAutoFlushUAV" to "r.D3D11.AutoFlushUAV." New: Fixed possible CPU stalls when using the GPU readback framework. New: Use "TSherwoodMap/Set" instead of "TMap/TSet" for resource residency tracking and descriptor table deduplication in DXR implementation. New: Added low quality setting for Distance Field shadow (uses 20 steps, no SSS). New: Added runtime Distance Field GPU downsampling. Post Processing Crash Fix: Fixed a crash in shader complexity pass when visualizing complexity in a cooked build. Bug Fix: Added a fix for bloom application in split screen. The "PostProcessTonemap.usf" now properly accounts for a restricted ViewRect on the Bloom input texture. Bug Fix: Applied a fix for debug draw pass using MSAA depth instead of resolved depth. Bug Fix: Fixed an issue with bloom being sampled incorrectly in tonemap pass. Bug Fix: Fixed the visualizer for motion blur to accept the last pass override. New: Changing default Exposure Bias for Auto Exposure to 1.0. New: Locking eye adaptation exposure to 1.0 when auto-exposure is disabled. New: Added ShaderComplexity support in cooked content. This requires the console variable "r.ShaderComplexity.CacheShaders=true" to be set in your .ini files. Tools Bug Fix: Passed through the DepthPriority parameter in PrimitiveDrawingUtils instead of hardcoded value. Tools Crash Fix: Fixed a crash in CrashReporterClientEditor occurring if the user closed the windows before the call stack was resolved, preventing the crash reporter from sending the crash report. Bug Fix: Fixed FArchive not properly writing when byte swapping is enabled. The written value was swapped after being written and the swapped value was returned to the user. Bug Fix: Fixed FString not swapping the bytes when the archive was written byte swapped. Bug Fix: Fixed an issue where the HLOD Outliner would stop updating while Play-in-Editor was running. Bug Fix: Fixed the HLOD MergeSetting for "LOD Selection Type" that was being overridden on load when saved to level instead of Blueprint Asset. Bug Fix: Fixed DMX Gobo Rotation effect and added a texture mask. Bug Fix: Actions are no longer posted to the right-click menu for the Curve Editor if the curve/key's parent curve is set to "read-only." Bug Fix: Fixed an issue when importing a Skeletal Mesh that would sometimes cause the material slots to not be properly assigned. Bug Fix: Renamed "UnrealDisasterRecoveryService" as "UnrealRecoverySvc" to shorten several build/install paths on Windows. Bug Fix: Fixed an issue with SN-DBS compilation with precompiled headers. This change restores generation of the dummy include file of PCHs. Bug Fix: Applied fixes for various custom debugger object views (from natvis). New: VTune 2019 ITT Notify library update. New: Enabled verbose output for SN-DBS code builds. This has been enabled by default to catch issues as they arise so that supported platforms can compile cleanly. New: Changed the default minimum scope time for framepro captures from 25us to 50us. This is the default for framepro windows client live captures. It significantly reduces hitching from scope "FramePro Start Frame" when recording captures. New: Added new DMX Fixture Content and removing old DMX Fixture content. New: Added a new UDP Messaging option to change the message format used to encode UDP messages. The choices are "CBOR (Standard Endianness)" and "CBOR (Platform Endianness)". New: Added a "Quit" menu along with the "CMD + Q" shortcut to close the Multi-User server window on Mac. New: Added a choice to the HLOD Outliner for "No HLOD" in the HLOD level dropdown menu. New: Added "Do not resave _BuildData.assets" in the HLOD rebuild commandlet as they aren't dirty. API Change: This change list added a minor feature along with 2 bug fixes discovered while adding the feature. AutomationTool New: UAT: Change the default "DeployFolder" name to be "ShortProjectName-RootFolderName-UserName" truncated to 50 characters. This will deploy the same project from different streams, or users, to different workspaces. And, use the "DeployFolder" instead of the "ShortProjectName" for all platforms. The existing "-deploy=DeployFolder" argument can be used to specify any custom workspace name. API Change: Make sure to use the new "DeployFolder" location in any command line parameters or debugger options when launching a deployed game. UnrealVS New: Added "TArray<*,FMemoryImageAllocator>" and "TMemoryImagePtr<>" visualizers. Bug Fix: Fixed the project file generator to put the proper Engine PlatformExtension directory in the solution. Virtual Production 3DText New: Switched to UStaticMesh-based components. Characters are now instanced meshes, allowing faster text generation and transformation-based animations. Characters are now cached at runtime for reuse. Added text animation tools using Text3DCharacterTransform, which allows position, scale and rotation-based character animations. Optimized speed of Kerning, Line Space, and Word Space. Removed third-party libraries (FTGL, GLU-Tessellate). New: Added a plugin that can create 3D charts based on data tables. The plugin currently provides 3 types of charts: Bar, Pie, and Line. All charts are fully customizable and support showing and hiding animations that can be triggered through an event. The plugin is still in beta. Composure New: Added the two-pass Color Difference Keyer to Composure, replacing the RGBtoHSV Material function and updated all references to it. New: Added a new keyer based on color channel difference that provides the user with better results, especially when keying hair or transparent objects. Materials include additional functions, such as Alpha Erosion and Despill. Materials are structured to support a single-pass setup for Composite Planes and a two-pass setup for Composure. LiveLink Bug Fix: Fixed a bug when interpolating SceneTime from LiveLink frames to now use linear interpolation. Bug Fix: Fixed incorrect background color of LiveLinkClientPanel when docked. Bug Fix: Fixed default LiveLink preset to load when depending on other modules. Moved to PostEngineInit instead. Bug Fix: Initialized the LiveLink source setting when connecting via Blueprint. Bug Fix: Changed the name of LiveLinkMessageBusDiscoveryManager thread. New: Changed how MobuLiveLinkPlugin reads reference time, so that it reads value set by reference time provider without adjusting for system time difference. New: In Create LiveLink Timecode provider, added GetLiveLinkTime to LiveLinkBaseFrameData. Renamed FOnLiveLinkSubjectFrameDataReceived to FOnLiveLinkSubjectFrameDataAdded. New: [LiveLinkComponent] Subject Role was split into a hierarchy, in which each role class is assigned a controller. Users can now select a specific controller for a role. For each role class, Project settings were added to set a default controller. New: Added a picker for LiveLinkSubjectKey. New: Added support for multiple virtual subject sources to provide better extension flexibility. New: Added a new LiveLink Axis Switch option to switch the translation unit. New: A LiveLink Preset can now be loaded from a command. Miscellaneous New: Added a plugin that allows a cine camera to project textures and videos on selected objects. An alpha can be used with textures as a cut-out. An alpha can be generated using the new color channel difference keyer. The plugin is still in beta. nDisplay Crash Fix: Fixed cluster event packing from a slave node that could lead the engine to crash during a second Play in Editor session. JSON data was being generated in a wrong way. Crash Fix: Fixed a crash when a non-nDisplay StereoRendering device is used with nDisplay active. Crash Fix: Fixed a bug when nDisplay is used with instanced stereo enabled that could crash the engine. Crash Fix: nDisplay sync components now unsubscribe from the replication process on EndPlay. This fixed some unexpected issues and crashes. Bug Fix: Fixed the 32K limit in networking. The networking buffer size for nDisplay internal communication and cluster events is now a UINT (32 bits). Bug Fix: Fixed a bug where DisplayClusterRootActor could not be found when located in a sublevel. Bug Fix: Fixed synchronization issue. All replication data on a master node is now acquired on game thread. Bug Fix: Fixed several issues with receiving/emitting nDisplay cluster events during Play In Editor. New: The nDisplay Master node will now send its MessageBus interceptor settings to the cluster so all nodes use the same setup and avoid multi-user synchronization bugs. New: Added the LiveLinkOverNDisplay plugin, which provides synchronized live link data across an nDisplay cluster. New: Added mesh-based warp rendering. Added Chroma Key markers, allowing continuity rendering over multiple cluster nodes. New: Created a new Actor type, DisplayClusterTestPatternsActor. It enables us to use both console and cluster events to control calibration patterns appearance. Supports custom patterns. New: Integrated NVIDIA hardware-based Frame Lock and Swap Sync for Quadro and RTX Pro cards. New: Added the option to assign an nDisplay instance to an available GPU from the configuration file. New: Added a temporary warning display if a config path has spaces. nDisplay config files that have spaces in the path are temporarily disabled. New: RootNode is now used as the default if a parent Component is not found for simple projection policy. If a screen for simple projection policy has a non-existent parent node, the nDisplay origin will be used. New: Added Three-Layer Composition changes for In-Camera VFX, including the ability to ignore Actors for In-Camera VFX rendering. *ake Recorder Bug Fix: Recorded data in Take Recorder is now smoother. Timecode Bug Fix: Updated code to directly access FApp::GetTimecode instead of using the engine's timecode provider. Bug Fix: Updated TimecodeSynchronizer to use the FApp flags instead of the GEngine flag. New: TimedDataMonitor plugin was added. This adds a new way to configure and monitor different Timed Data Inputs such as LiveLink and MediaIO data. This adds interfaces a user can extend from so its data is automatically picked up by the monitor. It gives the user one central place to visualize different data sources alignment compared to the engine's evaluation point. This tool is meant to be used first before considering using TimecodeSynchronizer. New: System timecode can now generate subframes. New: A SystemTimecodeProvider can now be created when requested. Let the user change the CustomTimeStep and TimecodeProvider in the Project settings. New: Added SubFrame delay for TC Provider and LiveLink. New: Replaced FTimecode with FQualifiedFrameTime in FApp. If there is no TCProvider, you can now use TOptional in Fapp to invalidate the frame time. Remove FTimecode for maths in LiveLink and nDisplay, only use timecode for display. You can now use FTimecode::IsDropFormatTimecodeSupported to convert from FrameTime to Timecode. You can now generate a default timecode value when no timecode is set. By default it's enabled and the framerate is 24 fps. Added a CVar that sets the engine in DropFrame or NonDropFrame when the timecode is 29.97 or 59.94. Removed the Sequencer option to select between the two. New: You can now set a frame delay in the Timecode panel. New: Added the option to report dropped frames with Blackmagic Custom Time Step. New: Added support for fractional timecode to the ARKit face protocol. USD Crash Fix: Fixed a bug when reading normals that were not varying per vertex on a USD geometry Actor. Bug Fix: USD instances are now processed correctly when using the Import Into Level command. New: The USD primitive Details panel now lists all attributes of the selected primitive and their values. New: Added support for USD MetersPerUnit metadata when loading a USD Stage through the USD Stage Actor. New: Universal Scene Description (USD) Added auto-collapsing of all Geometry Actors under a Component type to achieve better performance. Reloading a USD Stage will be faster as derived data generated from USD assets are now stored in the derived data cache. Static meshes generated from USD primitives now have a physics body and VR scouting can be used to interact with them. New: Added USD support for the Linux editor. The editor must be compiled with run-time type information (RTTI) enabled. New: Extended the USD schemas translation framework to allow overriding a schema translator and adding support for custom USD schemas. New: Added support for USD Point Instancers through the USD Stage actor. A point instancer is converted as a HISM component. New: Multi-threaded the creation of the Static Meshes when loading a USD Stage, drastically reducing loading time. New: Materials added to Actors spawned by the USD Stage will now be saved as custom Assets in the current USD edit target. When USD Stage reload, any custom UE4 Materials found in the USD data will supersede other USD materials. This behavior allows the user to set up the look in the engine using Unreal Material assets. Video IO Crash Fix: Fixed a bug the could cause the engine to crash when trying to use the HAP video codec with DX12 Crash Fix: A bug was fixed so that inputting a negative size in the Media Capture panel could no longer cause the engine to crash. Bug Fix: Fixed incorrect category name for Avid DNxHD encoding. Bug Fix: - Fixed AJA output timecode overriding timecodes of other channels. Bug Fix: When outputting with AJA, a ready frame is no longer used when no frame is available. Frame capture is skipped when the gameplay thread is not executing. Bug Fix: When outputting with AJA, the correct number of drop frames are now reported. Bug Fix: The AJA Timecode provider frame rate is now capped to 30 when read from the reference pin. Bug Fix: Fixed how the engine detects if an AJA card is already used, to better handle when Unreal Engine 4 crashes and leaves the card in an unpredictable state. Bug Fix: Fixed InlineEditCondition for FileMediaOutput. New: Added the AJA plugin. New: Updated the AJA SDK to 15.5.1.2. New: Added the AJA Auto-Detect button. New: Added the Blackmagic Media plugin. New: Updated MediaIO to work with Timed Data Monitor. Disabled LiveLink' ValidEngineTime and ValidTimecodeFrame setting by default. Added the option to set the LiveLink MessageBusSource to Timecode by default. New: When using an AJA media card, the name of the device app type being used can now be printed. Virtual Camera Bug Fix: Fixed log spam when using a custom tracker in VirtualCamera. Bug Fix: Fixed incorrect string comparison that stops RemoteSession from finding the correct channel. VirtualProduction Utilities Bug Fix: Teleport to bookmark now takes the headset offset into account. Bug Fix: Fixed flying feature in VR scouting. Replaced action type string TrackpadPositionX and TrackpadPositionY with TrackpadPosition_X and TrackpadPosition_Y in related Blueprints. Bug Fix: Fixed camera grabbing when snapping is on. Bug Fix: Fixed crashing when in VR and Multi-User, where other machines may crash when one starts VR Mode. New: You can now create an Actor that controls how to display Unreal Motion Graphics in fullscreen in different game modes. New: Added Color Correction Volumes/Regions to the VirtualProductionUtilities plugin. These are used to blend real sets with LED wall extensions. Three-dimensional shapes, such as spheres, cylinders, and cones, can color correct pixels located inside of them, with an editable fade-out region. XR New: Reorganized XR rendering options. Single-pass stereo options are now consolidated. Multi-View Direct is now automatically used for supported plugins when targeting Mobile. Multi-View is now automatically used when Instanced Stereo is enabled. Mobile HDR has been migrated to an XR rendering option and has to be disabled for Mobile Multi-View. Improvement: Improved logging for remote connection failures. Improvement: Scene capture and reflection capture in VR Mode use Grow rather than Resize in select cases to improve performance. Removed: Leap Motion plugin removed from engine as it is no longer supported. HoloLens Crash Fix: The camera can be explicitly turned on or off at runtime, and defaults to off. This fixes a crash that only shows in newer HoloLens flashes and generally reduces power consumption for all HoloLens apps. Bug Fix: HoloLens now uses its device profile rather than creating a default device profile. Bug Fix: Fixed a crash while changing the IP address of the remote device. Bug Fix: Added a warning if developers run with the wrong XRSystem active and trying to remote to HoloLens. Bug Fix: Updated default minimum Windows 10 SDK for HoloLens packaging to version 10.0.17763.0. Bug Fix: Fixed issues that would cause packaging plugins on HoloLens to fail due to incorrect architecture string. Bug Fix: Interop projects now handle spaces in the path for custom build step. New: Added support for mobile multi-view on Hololens New: Exposed HandMesh for HoloLens. New: Updated to latest HoloLens QR tracker plugin. New: Exposed controller and hand-tracking status to WMRFunctionLibrary. New: Implemented remoting from game. New: Displays remoting connection status (connecting, connected, disconnected) in remoting settings window New: Added webcam support for HoloLens. New: Exposed hand state to C++. New: Exposed hand joint radius so that a pointer can appear on the tip of the finger instead of the middle. New: Returns a boolean whether hand tracking is supported. New: Enabled HoloLens support for Microsoft Spatial Audio New: Added HoloLens third camera MRC support. New: Moved HoloLens input sim injection for controller tracking status to the HMD class. The SpatialInput class only implements the IMotionController interface, but does not provide input sim data through the WMR function library. The function library directly calls the HMD function, so implementing it on that level covers both code paths. Improvement: Changed failure reason on HoloLens Remoting connect logs from a numerical int to the enum value name. AR Bug Fix: Fixed a typo in a log message for attempting to start an AR session without a session config object. Bug Fix: Fixed issue with converting UnrealTargetPlatform to string when parsing command line options Added a TypeConverter for UnrealTargetPlatform. New: Added check if a typeconverter exists when parsing command line arguments ARKit is reserved for testing purposes Magic Leap Crash Fix: Fixed vertex normal calculation for Magic Leap hand meshing, as using incorrect normals was crashing on low poly meshes. Crash Fix: UpdateTrackedGeometry is called in FLuminARTrackedPointResource::UpdateGeometryData even if we couldn't get a valid position and orientation to update to. This addresses a crash that occurs within certain Blueprint functions (GetLocalToWorldTransform, for example) if Point does not have a valid ARSystem pointer during scene transitions. Bug Fix: Added hooks into input focus callbacks for Lumin Bug Fix: Fixed an error where legacy button code would send home button taps twice, resulting in behavior tied to home tap toggling on and off immediately. Bug Fix: Fixed hand mesh winding order. Bug Fix: Exposed mic mute state getter and setter. Bug Fix: Fixed bugs with flushing audio buffer when device goes in standby. Bug Fix: Updated deprecated functions in MLAudio and MagicLeapAudioCapture Bug Fix: Fixed multiple input tracker creation and disabling mouse / keyboard input during VRPreview. Bug Fix: Export Vulkan Command Wrappers FWrapLayer methods imported for Magic Leap and OpenXR plugins to avoid build breaks when VULKAN_ENABLE_IMAGE_TRACKING_LAYER is enabled. Bug Fix: Fixed AndroidRelativeToAbsolutePath() for Lumin. Bug Fix: Depth target is no longer rendered to MLGraphics in Zero Iteration. Only reallocate the depth target if the requested size matches the expected size. This workaround is required because this depth allocation function can receive requests for targets other than the one for eye stereo render, such as when using a SceneCaptureComponent. Bug Fix: Fixed ImageTracking not working on consecutive VRPreview launches. Bigfix: Fixed an issue preventing light estimation from working when ARSession is restarted on the device. Bug Fix: Fixed a Vulkan validation layer error for missed layout transition. Bug Fix: Fixed planes not working on consecutive VRPreview runs when using the AugmentedReality interface. Bug Fix: Added a workaround fix MRMesh flickering on Vk desktop renderer.: Fixed flickering caused by garbage data being used for the bounds in a frustum/bounds visibility check. Bug Fix: During tangent generation, normals are now generated by adding the normal of each triangle that a vertex is part of and then normalizing the result. This removes the need to add a separate iteration over the mesh just to do normalization. Bug Fix: Hands now support one-sided materials. Bug Fix: Reset worker thread setup for image tracking so it restarts on consecutive VRPreviews. Bug Fix: Fixed file permissions on Mac binary builds to replace existing artifacts for lumin. Bug Fix: Added a missing config property to allow developers to manually call the lifecycle set ready notification function. Bug Fix: Re-install app on device if it is uninstalled outside of Unreal context during consecutive LaunchOn iterative deployments. New: Defined a PLATFORM_SUPPORTS_VULKAN preprocessor flag for Magic Leap. New: Exposed function to indicate if the user is holding a controller in hand. New: Added some helper methods to convert between motion sources and hand keypoint enums. New: Added events in MagicLeapImageTracker to notify when a target has been added in the backend, exposing all functionality via direct C++ calls in the module. New: Added PLATFORM_SUPPORTS_VULKAN in MagicLeapOpenGL module. New: Updated Magic Leap SDK to .23 - includes changes to support new externally-owned Vulkan semaphores. New: Implement setting Lumin thread affinities via a config file. New: Added MagicLeapConnections plugin. New: Updated the MagicLeapContacts plugin. New: Update privileges for the 0.23 Magic Leap SDK. New: Moved MagicLeapARPin interface into its own plugin called MagicLeapPassableWorld. This is re-implemented as a ModularFeature and will be used in the Magicverse plugins as well on iOS and Android. The MagicLeapPassableWorld also contains MagicLeapSharedWorld helpers. New: Pass depth buffer to MLGraphics Improve frame synchronization in MagicLeapMetalCustomPresent. New: Implement LocationServicesBPLibrary interface for MagicLeapLocation plugin. New: Update privilege management in AppEventHandler, update available privileges in the 0.23 SDK, and some cosmetic updates to the MagicLeapMedia plugin. New: Refactored MagicLeapAR module - fold FLuminARFrame, FLuminARSession and FLuminARDevice into FLuminARImplementation. New: Implemented LightEstiamtion, ImageTracking in MagicLeapAR. This requires small updates to the AugmentedReality interface itself. New: Added the new MagicLeapMovement plugin, and updated deprecated function usage in MagicLeapMusicService plugin. New: Added Support for localized app names and icons. New: Magic Leap: Implement persistent planes queries to MagicLeapPlanes. Force request LocalAreaNetwork privilege on Lumin if a map travel is requested via the command line New: Magic Leap: Add MagicLeapCVCamera module New: Add support for debugging Blueprint-only projects on Lumin via Visual Studio. New: Updated default device profiles and runtime settings config for Lumin New: Exposed input focus events in the LuminApplicationLifecycleComponent. New: Expose raw perf numbers given by the MLGraphics api. New: Added functions for querying platform API level. New: Add 'MagicLeap' prefix to MeshTrackerComponent. New: Added Blueprint callable function to set whether to use triangle weights when calculating the vertex normals. False by default, as this is slightly more expensive and doesn't have a huge impact. New:. Improvement: Supplied a box based on the HMD position and orientation. Removed: Deleted unused TextureReader class. Removed: Removed GetTrackableHandleMap from the ARTrackingSystem interface. Instead, a new function RemoveHandleIf is used to set the StoppedTracking state and remove the handle. Removed: Removed FLuminFileMapper, as it was returning the same names without case changes. Removed: Removed deprecated fixation comfort api from MagicLeapEyeTracker. Removed: Removed explicit caching and re-sending of the current mesh data. This is not required because MRMesh keeps a local cache of the current data to be rendered. We only need to send new data when we want to update the mesh. VR Crash Fix: Fixed resource transition with SteamVR on Vulkan to avoid an assert on SteamVR startup. Crash Fix: Addressed a crash that occurs during Ovr Avatar shutdown due to delay load helper throwing an exception when the module has been unloaded. Crash Fix: Fixed a crash that occurs during Oculus room invites Crash Fix: Fixed an assert on ending PIE or closing an app using D3D12 and SteamVR. This also fixes other potential issues around lifetime with D3D12 and Vulkan textures with the XrSwapChain class and resource aliasing. Crash Fix: Fixed out-of-order shutdown crashes with Oculus Avatar plugin. Bug Fix: Fixed a bug preventing developers from mapping actions to the Oculus Touch's menu button. Bug Fix: Switched XrSwapchain and SteamVR to use the new RHICreateAliasedTexture API. This prevents a redundant/dummy texture creation during initialization. Bug Fix: Added D3D12 renderbridge implementation, including a SteamVR bUseExplicitTimingMode flag to ensure thread-safety in DX12. Bug Fix: Fixed Vulkan timer query buffer overflow. Bug Fix: Removed redundant swapchain re-creation when switching modes (Example: VRMode/PIE and back). Bug Fix: Addressed an issue where fragment density map in Vulkan (when using Vulkan foveation) is added in the renderpass, but not in the renderpass attached to the graphics pipeline. Bug Fix: Fixed a bug where default bindings weren't correctly generated for SteamVR if the manifest file didn't exist, causing packaged games to have no bindings and motion controllers not to be tracked. Bug Fix: Fixed hitching that occurs on Oculus when a level contains uninitialized or hidden stereo layers when entering VR preview. New: Implemented SteamVR high-quality stereo layers by implementing a plugin-specific stereo layer shape. New: Updated OVRPlugin to version 1.45. New: Implemented Oculus Audio update, adding support for Arm64 on Quest. Windows Mixed Reality Bug Fix: Changed status text delegate binding from BindRaw to BindSP to fix an occasional AV on ExecuteIfBound. Bug Fix: Updated status text to use a non-deprecated SetText override. Bug Fix: Disabled rendering when headpose is lost. In these cases the system dialog takes over. Bug Fix: Fixed audio cutting out when VRPreview window is not in focus. Bug Fix: Fixed use of finger transform buffer that will be freed rather than the local copy we make for one hand. Bug Fix: Fixed a bug in SpatialStageFrameOfReference that would cause camera height to decrease when tracking is lost. New: Implemented simulation for button press state in the Windows Mixed Reality HMD. New: Implemented new input simulation module for the WindowsMixedReality plugin. This module provides an engine subsystem that stores generated data for replacing device input in case no HMD is connected. Programming Deprecated: The MeshDescriptionOperation module has been deprecated. It's utility functions are being moved over to the StaticMeshDescription module. Bug Fix: Removed the duplicated object name from output paths when performing an Advanced Copy operation. Bug Fix: FXmlFile now supports the ">" character within attribute strings. It doesn't need to be escaped in a valid xml file. New: Static Meshes and cached Mesh Draw Commands are now generated in parallel. Watch out for any race conditions that might arise in code that is called downstream. Upgrade Notes Animation Animation Blueprints Changed AnimNode Layer Blend Curve Options to: Override DoNotOverride NormalizeByWeight BlendByWeight UseBasePose UseMaxValue UseMinValue "MaxWeight" is now called "Override". Blueprints Merged UK2Node_LatentOnlineCall into UK2Node_AsyncAction, and moved it from the Kismet module into BlueprintGraph. If you have a custom K2 node derived from either you may need to update your module dependencies or remove your subclass and add a redirector to your DefaultEngine.ini. Core UProperty This is an API breaking change. Converted UPropertiesto FProperties. This means that internal Engine objects that represent member variables of UClasseswill no longer be UObjectsthemselves. This results in memory overhead reduction (123 bytes per property on average) and improved performance when constructing, destroying property objects, when collecting garbage and iterating over all objects. Property casts and property iteration is now faster too. U*Propertyclasses have been renamed to F*Propertyclasses so all references to UProperties in your project code should be updated FPropertiesuse different Cast functions ( CastField/ CastFieldChecked) so all property casts should be updated in your project game code. UPROPERTY() UProperty* MemberVariable;declarations should be replaced with UPROPERTY() TFieldPath<FProperty> MemberVariable;. It still is a Garbage Collector exposed hard reference with PendingKillsupport (becomes null if the owner UStructis PendingKill) and automatically serialized with SerializeTaggedPropertieswith automatic conversion on load for all existing saved references to UPropertiesin packages. TFieldIterator<UField>no longer iterates properties, use TFieldIterator<FProperty>instead FindFieldhas been split into two functions: FindUFieldand FindFProperty. The former should be used to find member Functions or Enums, the latter should be used to find member variables. FPropertiesare no longer constructed with NewObject, use C++ newinstead UStruct::Childrenlinked list no longer contains properties. Use UStruct::ChildPropertiesinstead. FArchivederived classes that collect UObjectreferences now need to override virtual FArchive& operator << (FField*&)to catch FPropertyreferences. All existing cases that needed it should already support it. There's no FProperty::GetOuter(). Use FProperty::GetOwner()instead. The reason is that Outers are an UObjectthing. FProperties are now owned by their parents in a more explicit way. There's a helper structure called FFieldVariantthat acts as a container that can be either UObjector FProperty. It's used in a few places across the codebase to ease the conversion to FPropertieswhere otherwise a separate, almost identical version of the existing function would have to be created. However, the goal is to remove it eventually as the affected system owners will be upgrading their code. In editor builds, there's a helper UPropertyWrapperclass that's used by Details Panels to pass FPropertyreferences around. Again, this was to prevent doing very extensive changes to systems that only actually deal with FPropertiesas editable objects in a handful of places. DevTools AutomationTool New: Added BuildCMakeLib UAT automation script for building third-party libraries that use CMake.Use this to support platform extension and to rebuild some of the third party libraries that the engine provides. For example, if the binary library output is located in Engine/Source/ThirdParty/ExampleLib/examplelib-0.1/lib/PlatformName/Release/ExampleLib.a You could run the following command to build that lib: RunUAT.bat BuildCMakeLib -TargetLib=ExampleLib -TargetLibVersion=examplelib-0.1 -TargetConfigs=relase+debug -TargetPlatform=PlatformName -LibOutputPath=lib -CMakeGenerator=Makefile -CMakeAdditionalArguments="-DEXAMPLE_CMAKE_DEFINE=1" -MakeTarget=all -SkipSubmit The -LibOutputPath command line is to override the directory that libraries are stored in, the default is to not place the output inside an additional directory. The script is currently intended to be run once for each platform, as the expected CMakeGenerator may differ per platform. The script will attempt to find the CMakeLists.txt in a number of locations, first in the root of the library directory for example, Engine/Source/ThirdParty/ExampleLib/examplelib-0.1/then in BuildForUEinside that root and then a per platform BuildForUE,which may either be inside the platform extension's source e.g. Engine/Platforms/PlatformName/Source/ThirdParty/ExampleLib/examplelib-0.1/BuildForUE or inside the third party root's directorye.g. Engine/Source/ThirdParty/ExampleLib/examplelib-0.1/BuildForUE/PlatformName. -TargetLibSourcePath= can be used if the source for the library is external to the engine and only the binaries are stored with the engine. In that case the path should be to the root where the CMakeLists.txt is stored. UnrealBuildTool Changes to child plugins and platform extensions: Added support for white-listing additional plugins in the child plugin. Added support for adding additional modules to child plugin extensions. Child plugins can now override SupportedTargetPlatforms if it is not defined in the parent. Monolithic programs inside platform extensions can now output to the Binaries directory in the extension instead of the main Engine binaries directory. Previously, if SupportedTargetPlatforms was not defined in the parent plugin but was defined in the child plugin, it would have been ignored, meaning all platforms would be supported. Child plugins can now override SupportedTargetPlatforms, which means that if SupportedTargetPlatforms is not defined in the parent plugin, it will start from a base list of no platforms being supported for that plugin. Child plugins cannot remove SupportedTargetPlatforms from other child plugins. If you have child plugins where the parent plugin does not use SupportedTargetPlatforms, you will need to make sure to remove the SupportedTargetPlatforms from the child to return to the original functionality. Editor Sequencer The IMovieScenePlayer interface's UpdateCameraCut method was changed: it now takes the new camera object, and a parameter structure. The parameter structure contains what was previously passed as loose arguments, plus some new blending information. Gameplay Framework Replicated properties in AActor, UActorComponent, and USceneComponent are no longer marked deprecated, but are now private. Replace direct usage of these variables with calls to accessor functions. AActor::bAutoDestroyWhenFinished is now private. Replace direct references to this variable with calls to AActor::GetAutoDestroyWhenFinished and AActor::SetAutoDestroyWhenFinished. Networking Moved ResizableCircularQueue.h from Engine/Net to Core/Net. Anywhere that you have included this file, change "Engine/Net/ResizableCircularQueue.h" to "Core/Net/ResizableCircularQueue.h". Socket Subsystem The commandline flag "-PRIMARYNET" has been dropped due to its obsolescence; GetLocalHostAddr will perform the same operation without the flag specified. Online The Steamworks build script now supports updating or changing the SteamSDK to specific versions. In Steamworks.build.cs, the variable SteamVersionNumber now controls all the versioning for the SteamSDK. Changes to IVoiceChat interface: VoiceChat.h has moved from Engine/Source/Runtime/Online/Voice/Public/Interfaces to a new VoiceChat header-only plugin in Engine/Plugins/Online/VoiceChat/VoiceChat/Source/Public. This header can be included by adding "VoiceChat" to the PublicIncludePathModuleNames/PrivateIncludePathModuleNames in Build.cs files as necessary. The variable FVoiceChatResult::bSuccess has been replaced with the function FVoiceChatResult::IsSuccess. Added an enum called FVoiceChatResult::ResultCode, providing a success value and a variety of error values. FVoiceChatResult::ErrorCode changed from an int to an FString. This avoids collisions, and allows for both shared error codes (such as errors.com.epicgames.voicechat.not_initialized), and implementation-specific ones (such as errors.com.epicgames.voicechat.vivox.missing_config). The intent is that errors will be mapped to one of the common error categories in EVoiceChatResult if possible (or EVoiceChatResult::ImplementationError if none apply), and to one of the common error codes in VoiceChatErrors.h. If no common error codes apply, then the implementation can provide its own error codes (see VivoxVoiceChatErrors.h for reference). FVoiceChatResult::Error has been renamed to ErrorDesc. The intent is for this to provide additional information for logging. Changes to VivoxVoiceChat plugin: Errors originating from the plugin (not the SDK) which were previously reported with a small negative integer and FString (for example, -2 and "Not Connected"), have been changed to provide error categories and string codes (for example, EVoiceChatResult::NotConnected and errors.com.epicgames.voicechat.not_connected). Numeric errors originating from the Vivox SDK itself were previously reported directly in the ErrorCode. These are now mapped in the following ways: Where possible, some errors, e.g. VX_E_NOT_INITIALIZED are mapped to common error categories and codes, in this case EVoiceChatResult::NotInitialized and "errors.com.epicgames.voicechat.not_initialized". Some errors are not mappable to common error codes and are therefore mapped to vivox specific error codes, with the error category EVoiceChatResult::ImplementationError. For example, VX_E_CALL_TERMINATED_KICK is mapped to the error code "errors.com.epicgames.voicechat.vivox.kicked_from_channel". Any errors that are not mapped to specific error codes are reported as EVoiceChatResult::ImplementationError and "errors.com.epicgames.voicechat.vivox. ". The actual numeric error code and status string sent by the SDK are provided in FVoiceChatResult::ErrorDesc in the format "StatusCode=%d StatusString=[%s]" \ Please look in VivoxVoiceChat.cpp and especially FVivoxVoiceChat::ResultFromVivoxStatus to see the current mapping. USocialManager::bLeavePartyOnDisconnect has moved to USocialSettings. Config files referencing the old location will need updating. Platforms All Mobile Renamed the mobile "Support Distance Field Shadows" setting to "Support Pre-baked Distance Field Shadow Maps" and improved the tooltip to reduce confusion with the desktop Distance Field Shadows feature. Android Updated Android toolchain to NDK 21 (20 is also supported for x86_64). See the new Android setup documentation using Android Studio and Engine/Extras/Android/SetupAndroid scripts. Refactored com.epicgames.ue4.network.NetworkChangedListener to com.epicgames.ue4.network.NetworkConnectivityClient.Listener. The NetworkChangedManager now conforms to the com.epicgames.ue4.network.NetworkConnectivityClient interface. Existing calls to NetworkChangedManager.addListener(Listener listener) and NetworkChangedManager.removeListener(Listener listener) will supply the above updated Listener class instead of the old NetworkChangedListener. The NetworkChangedManager now determines the connectivity through a combination of Android system network callbacks to determine connectivity, then verifies connectivity by performing an empty HEAD request to " ". Should this request fail, the manager will indefinitely continue to retry against this URL with an exponential back-off up to the NetworkChangedManager.MAX_RETRY_SEC. This constant defaults to 13 seconds. OpenGL on Android will now use a separate thread (RHIT) for graphics commands submission. Older engine versions had RHIT disabled by default. To disable RHIT in 4.25, add r.OpenGL.AllowRHIThread=0 into your project DefaultEngine.ini. Rendering HLOD proxy meshes can now cast dynamic shadows.If you want to keep the old behavior without adjusting your data, you can set the r.HLOD.ForceDisableCastDynamicShadow cvar to 1 FX - Niagara Niagara now features the Niagara Platform Set, a new feature for controlling which platforms use certain aspects of Niagara. Current users include: Emitter Enabled/Disabled switch System-level scalability settings in EffectType, and overrides in System Emitter-level scalability settings in EffectType, and overrides in Emitter This entirely replaces the previous Detail Level feature. Users now select from Quality levels as a basic use case for scaling across platforms. Further fine grained control can be achieved by setting per device profile overrides to this. For example, all Quality levels could be enabled but Android, iOS, or some subset of mobile devices could be specifically disabled. Quality Level is controlled by fx.Niagara.QualityLevel. Emitters are optionally pruned from cook when disabled by fx.Niagara.PruneEmittersOnCook. Old projects that had custom setup for Niagara Detail Level will have to update their systems to the new setup. Projects using default detail levels will automatically update. Updated module graph UI to better support parameter default modes. Changes: "Value" mode variables do not allow connections to their input pins "Binding" mode variables are hidden completely Changing the default value in the details panel automatically updates all the graph nodes Existing graphs are validated and updated on PostLoad Old assets should automatically be updated to the new schema. If the default value pins are connected to a subgraph to initialize the value, then the default mode is automatically set to "Custom", otherwise it is set to "Value" and the pins are marked as read-only. Data Interface functions which need per-instance data can now be called from custom HLSL on CPU emitters. This required changing the order in which parameters are passed to DI functions on the CPU. Specifically, the instance data pointer is now the first input parameter, instead of the last. If you have custom data interfaces which use per-instance data, you need to update the code to read the user pointer first instead of last, for example: ``` VectorVM::FExternalFuncInputHandler FirstInput(Context); VectorVM::FExternalFuncInputHandler SecondInput(Context); VectorVM::FUserPtrHandler InstData(Context); // last input VectorVM::FExternalFuncRegisterHandler Output(Context); ``` becomes: ``` VectorVM::FUserPtrHandler InstData(Context); // first input VectorVM::FExternalFuncInputHandler FirstInput(Context); VectorVM::FExternalFuncInputHandler SecondInput(Context); VectorVM::FExternalFuncRegisterHandler Output(Context); ``` Updated Niagara modules, dynamic inputs and functions to use the new explicit RandomRangeFloat and RandomRangeInteger functions from CL # instead of the old RandomRange helper. This might have breaking changes for non-deterministic CPU-side random numbers, as the upper bounds will be off by one compared to earlier for integer based types (booleans, enums, regular integers). Added FGPUSortManager that handles different GPU sort tasks. The current clients for it are Cascade and Niagara. Added a framework to allow any system to send GPU sort tasks (like sorting particles).. Set "Gap Correction Amount" to 0 on any "Generate Location Event" modules you are using. If you are performing manual position extrapolation on received events, disable it and verify that the values are correct. Manual extrapolation may still be necessary for spawn events if the source emitter is using interpolated spawning, since there's no equivalent for interpolated spawning on event scripts. Niagara ribbons are now preserving multi-ribbon ordering, in order to prevent random flickering when the camera moves around. Disabled multi-ribbon ordering when using opaque materials. Fixed random flickering when using multi-ribbon in Niagara. Added support for uint32 vertex indices when using Niagara ribbons. Fixed a bug with Niagara ribbons when requiring more than 16-bit indices. Known Issues For a complete listing of known issues affecting Unreal Engine 4.25, please see the Unreal Engine Public Issue Tracker.
https://docs.unrealengine.com/4.26/zh-CN/WhatsNew/Builds/ReleaseNotes/4_25/
2021-09-17T02:53:37
CC-MAIN-2021-39
1631780054023.35
[array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_1.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_2.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_3.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_7.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_8.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_10.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_11.png', '各向异性(4.25)'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_12.png', '各向同性(4.24)'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_13.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_14.png', '4.25透明涂层'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_15.png', '4.24透明涂层'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_16.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_17.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_18.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_19.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_20.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_21.jpg', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_22.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_24.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_25.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_26.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_27.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_28.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_29.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_30.png', 'Launch Storyboard Options'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_31.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_32.png', 'image alt text'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_34.png', '逐像素透射(4.25)'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_33.png', '非逐像素透射(4.24)'], dtype=object) array(['./../../../../../Images/WhatsNew/Builds/ReleaseNotes/4_25/image_36.png', 'image alt text'], dtype=object) ]
docs.unrealengine.com
Directory service You: - OU - Organizational Unit - DC - Domain Component About connecting the appliance to a directory serviceAbout connecting the appliance to a directory service In the Management Console, the Users > Directory Tree displays a hierarchical view of Users and Groups Supported protocols When binding to a directory service, the App Layering appliance is compatible with the following secure socket and transport layer protocols: - Secure Socket Layer: - SSL 3.0 - Transport Layer Security: - TLS 1.1 - TLS 1.2 the software discovers that a user is no longer an object in the directory service, it classifies the user as abandoned (you can view this information in the Information view for the user). Create a directory junctionCreate a directory junction Select Users > Directory Service. Select Create Directory Junction in the Action bar. This opens the Create Directory Junction wizard. - In the Connection Details tab, specify the details for the directory server. - Directory Junction Name - This name becomes the name of the folder that you see in the tree view. You can use any name, including the name of a domain in your directory service tree. - Server address - This is the name for the server you will use for the directory service. (IP Address or DNS Name) - Port - Specify the port number for communicating with the directory server. - SSL check box - Select this if you want to use Secure Sockets Layer (SSL) communication. If certificate errors occur, the wizard displays a list of these errors. If you know it is safe to ignore them, select Ignore Certificate Errors. - Test Connection - Click to verify that the appliance can connect to the directory service. - In the Authentication Details tab, enter the authentication details for a user who has permissions to search the directory service. - Bind Distinguished Name - To determine the correct syntax for the Bind DN or user name, see the documentation for your directory. The following examples show some of the ways you can specify a user for the directory service: - domain\username - [email protected]. - Bind Password. - Enter the password - Test Authentication - Click to verify that the connection to the directory server is valid. - In the Distinguished Name Details tab: - Specify where the software should start searching for users and groups in the remote directory service. Example: To start the search at the Group B Organizational Unit at the root of a domain, you would enter the following Base Distinguished Name: OU=GroupB, DC=mydomain, DC=com - Click the Test Base DN button to make sure that the Base Distinguished Name is valid. If you receive one of these messages, edit and retest the Base Distinguished Name: - A directory junction with this Distinguished Name already exists. - This Distinguished Name is already accessible through an existing Directory Junction. - This Distinguished Name encompasses at least one existing Directory Junction that will be replaced by this new one. In the Attribute Mapping tab, enter the names of directory service attributes that you want to map to the local attributes or use the default settings. Note: To change the mapping from local attributes back to default mappings, click Use Defaults. - In the Confirm and Complete tab, verify the Directory Junction settings, enter a comment if required, and click Create Directory Junction. If you enter comments, they appear in the Information view Audit History.
https://docs.citrix.com/en-us/citrix-app-layering/4/manage/users/directory-service.html
2021-09-17T05:18:46
CC-MAIN-2021-39
1631780054023.35
[]
docs.citrix.com
Upgrade a Standalone to 4.2¶ On this page The following steps outline the procedure to upgrade a standalone mongod from version 4.0 to 4.2.. Prerequisites¶ Standalone. replica set, see Upgrade a Replica Set to 4.2. - To upgrade a sharded cluster, see Upgrade a Sharded Cluster to 4.2.
https://docs.mongodb.com/v5.0/release-notes/4.2-upgrade-standalone/
2021-09-17T04:19:49
CC-MAIN-2021-39
1631780054023.35
[]
docs.mongodb.com
Feature-Based Transformers A Feature-Based Transformer is one which processes one feature at a time, in isolation from all other features. Some examples of Feature-Based transformers are: - AreaCalculator and LengthCalculator: These transformers perform calculations on only one feature at a time. The result of measuring one feature has no impact on the measurement of another. - CenterPointReplacer: – Each “center of gravity” (geographic center) calculation is unique and independent of other features. - 3DForcer: Each feature is given Z values to its vertices, but one at a time and without affecting the Z values of other features. Usage Notes: - Feature-Based Transformers don’t always need to process each feature in the same way. The use of attributes for parameter values means the process can vary with the attribute - for example, using an attribute value in the 3DForcer elevation parameter. - Aggregates: Each aggregate is considered a single feature; for example the measurement of the area of a multi-polygon feature returns a single value, not the value of each component of the aggregate. See Also: About Group-Based Transformers
http://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Transformers/Feature_Based_Transformers.htm
2021-10-15T22:47:30
CC-MAIN-2021-43
1634323583087.95
[]
docs.safe.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Information about a security profile and the target associated with it. Namespace: Amazon.IoT.Model Assembly: AWSSDK.IoT.dll Version: 3.x.y.z The SecurityProfileTargetMapping type exposes the following members .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/TSecurityProfileTargetMapping.html
2021-10-15T23:42:43
CC-MAIN-2021-43
1634323583087.95
[]
docs.aws.amazon.com
Rotations If you use a lot of campaigns with one Conversion funnel, you can use the rotations system. Create a rotation and assign it to several campaigns. Now changing traffic flow settings in a rotation will be applied to all campaigns with that rotation. Create a rotation Go to Rotations > Create. Name, group Name your rotation and create a group for it (you'll be able to sort rotations by group, it's pretty neat). Campaigns Click + Campaigns to add campaigns which will work according to this rotation. Be careful — once the rotation is saved, the traffic will be distributed according to the rotation settings, and not the campaign settings. Traffic distribution Now you need to set paths, landng pages, offers, and rules just like you did it for your campaigns. (more...) How do I apply rotations to campaigns? The are two ways to do this: 1. The first way is described above. Go to (Rotations > Create > + Campaigns). 2. Specify a rotation in the campaign settings. Let's get into the second way. Go to the campaign settings. At the upper right side of the page you'll see the current traffic distribution option — Custom. To apply a rotation to a campaign, select it in the dropdown menu. Distribution settings will be loaded to the campaign. Click Save. You can select several campaigns on the Campaigns tab and assign a rotation to them by clicking Edit. If you change distribution settings in the campaign, the rotation will change back to Custom. Binom memorizes custom distribution settings, and you can always get back to them by selecting Custom in the dropdown menu.
https://docs.binom.org/rotations.php
2021-10-16T00:13:10
CC-MAIN-2021-43
1634323583087.95
[array(['images/rotation-2.png', None], dtype=object) array(['images/rotation-4.png', None], dtype=object)]
docs.binom.org
The Keystore Manager is a window which allows you to create, configure or load Android keystores and keys. For further information see Android’s Keys, Certificates and Keystores documentation. To open the Keystore Manager, open the Android publishing window and select the Keystore Manager button. Note: You cannot change this keystore, key or certificate information once you sign your Android app. Use the Keystore dropdown to create a new keystore. Creating a new keystore automatically creates a new key. Fill in the New Key fields in the keystore manager. When you create a new keystore, the storage location you choose changes the default location that Unity opens your file explorer in to save your file. You can still change this after the file explorer opens. - Choose Anywhere to open the file explorer. By default, Unity stores your keystore inside your project folder. However, you can store this anywhere on your machine. If you store your keystore outside of your project folder, Unity saves an absolute path. - Choose In Dedicated Location to open the file explorer in a custom default location. By default, this path points to $HOME/ on MacOS and to %USER_HOME%\ on Windows. To define a new project-wide dedicated location, go to Unity > Preferences > External Tools > Android > Keystores Dedicated Location, then click Browse to select a location or enter a path in the text box. If you store a keystore to a dedicated location, Unity saves a relative path. This relative path is equal to the dedicated location paths on other machines. Therefore, the folders don’t need to be in the same place on both machines. Note that if you save the new keystore outside of your project folder or a shared directory, collaborators on your project might not have access to it. Use the Keystore dropdown to load an existing keystore. If you have multiple keys in your keystore, select a key for your project in the Android Publishing Settings Project Key fields. Note: You can also choose an existing keystore directly from the Android Publishing Settings, without using the Keystore Manager window. Use the following steps to store multiple keys in a keystore.
https://docs.unity3d.com/es/2020.1/Manual/android-keystore-manager.html
2021-10-15T22:43:07
CC-MAIN-2021-43
1634323583087.95
[]
docs.unity3d.com
Updating Extensions Automatic Updates - Activate your license. - Look for the update icon to indicate you have updates. - If an update is available for your extension it will show up here and can be updated with a few clicks. Manual Updates - Grab the latest version from the downloads tab in your account. - Navigate to Plugins page and deactivate the old version of the extension. - Delete the old version. - Click the Add New button at the top or navigate to Plugins -> Add New. - Click the Upload button and choose the new version .zip file you got in step 1. - Upload and then activate the new version.
https://docs.wppopupmaker.com/article/81-updating-extensions
2021-10-15T22:40:06
CC-MAIN-2021-43
1634323583087.95
[]
docs.wppopupmaker.com
User Guide¶ To get the most out of using our SDK, it’s useful to understand the basic concepts and principals we used when we designed it. It is also important that you are familiar with the F5® BIG-IP® and, at a minimum, how to configure BIG-IP® using the configuration utility (the GUI). More useful still would be if you are already familiar with the iControl® REST API. - Basic Concepts - REST API Endpoints - Python Object Paths - Coding Example - Further Reading
https://f5-sdk.readthedocs.io/en/1.0/userguide/index.html
2021-10-16T00:44:22
CC-MAIN-2021-43
1634323583087.95
[]
f5-sdk.readthedocs.io
For Developers¶ We welcome contributions to gcsfs! Please file issues and requests on github and we welcome pull requests. Testing and VCR¶ VCR records requests to the remote server, so that they can be replayed during tests - so long as the requests match exactly the original. It is set to strip out sensitive information before writing the request and responses into yaml files in the tests/recordings/ directory; the file-name matches the test name, so all tests must have unique names, across all test files. The process is as follows: Create a bucket for testing Set environment variables so that the tests run against your GCS credentials, and recording occurs export GCSFS_RECORD_MODE=all export GCSFS_TEST_PROJECT='...' export GCSFS_TEST_BUCKET='...' # the bucket from step 1 (without gs:// prefix). export GCSFS_GOOGLE_TOKEN=~/.config/gcloud/application_default_credentials.json py.test -vv -x -s gcsfs If ~/.config/gcloud/application_default_credentials.json file does not exist, run gcloud auth application-default loginThese variables can also be set in gcsfs/tests/settings.py Run this again, setting GCSFS_RECORD_MODE=once, which should alert you if your tests make different requests the second time around Finally, test as TravisCI will, using only the recordings export GCSFS_RECORD_MODE=none py.test -vv -x -s gcsfs To reset recording and start again, delete the yaml file corresponding to the test in gcsfs/tests/recordings/*.yaml.
https://gcsfs.readthedocs.io/en/latest/developer.html
2021-10-15T22:36:24
CC-MAIN-2021-43
1634323583087.95
[]
gcsfs.readthedocs.io
What’s new¶ News in 1.4.3¶ Platform changes:¶ -:¶ -:¶ -:¶ -. News in 1.4.2¶ Node/plugin changes:¶ -:¶:¶ -. News in 1.4.1¶ Node/plugin changes:¶ -:¶:¶ -. News in 1.4.0¶. Node/plugin changes:¶ -:¶ -. News in 1.3.5¶ Node/plugin changes:¶ - Calculations in Calculator can be deselected for output enabling better support for intermediary calculations. This also enables intermediary calculations to have different lengths from output columns. - The input table(s) in Calculator List. - Creating a custom data type -hesian -¶ - nodes, Figure Compressor, Layout Figures in Subplots, Export Figures - Calculator for a single Table added to Library - New Filter ADAFs node with preview plots and improved configuration gui - Manually Create Table - Signal generator nodes for generating Table(s) of sinus, cosines or tangents - Matlab Tables node - Hold value Table(s) - Flatten List - Propagate Input category in ADAFs - Table attributes are merged for the HJoin nodes - Allow setting fixed width/height for TextBoxes in Report Template - Easier date settings in Plot Table - Rewrote Matlab Tables and Matlab Calculator nodes Exporters/Importers changes¶ -¶ - Faster reading of writing of intermediate files - Faster ADAF copy methods - Improved length handling for tables - Faster execution of Select rows in Table(s) - Faster execution of Table and Select category in ADAFs - Responsive preview for Calculator List and Calculator API changes¶ -¶ - but there are new, more succinct, ways of writing nodes for 1.3.x that are not backwards compatible with 1.2.x. When writing new nodes, consider which older versions of the platform that will be used. New features¶ - Generic types - Higher order functions: Lambda, Map and Apply - Official, and much improved, support for Linked Subflows - Official support for Locked Subflows - New library structure using tags New nodes¶ - New generic versions of all list operations - Ensure columns in Tables with Table - Conditional Propagate - Extract Lambdas builtin node for reading lambda functions from existing workflows User interface¶ -¶ - Simpler APIs for writing nodes. See Node writing - New method in ADAF API: Group.number_of_rows - Configuration widgets can expose a method called save_parameters which is called before the gui is closed. See Custom GUIs - Added API (parameter helper): List parameter widgets emit valueChangedsignal - Linked/locked subflows¶ - Export Datasources create missing folders - Fixed Export Texts Other improvements¶ - Added default workflow environment variables SY_FLOW_FILEPATH, SY_FLOW_DIR News in 1.2 series¶. The bundled python installation has been upgraded with new versions of almost every package. Added to the packages is scikit-learn, used for machine learning. Our investigations suggest that the new package versions are reasonably compatible with old nodes and cause no significant differences for the standard library. The node wizard. - Creating a custom data type. - Significantly improved handling of unicode paths including the ability to install Sympathy and third party libraries in a path with unicode characters Nodes and plugins¶ - Added CarMaker type 2 ERG ADAF importer plugin called “CM-ERG” - Plugins can now export to non-ascii filenames - Fixed MDF export of boolean signals - Added generating nodes for empty Table, Tables ADAF and ADAFs. - Convert column nodes can convert to datetime - Calculator node can produce compact output for length matched output - Lookup nodes handles both event column and other columns with datetimes - Time Sync nodes “SynchronizeLSF” strategy should work as expected again. The Vjoin index option is now only used for the ”Sync parts” strategy New command line options¶ See Sympathy Start options for more info. -¶ - Libraries must now have only a single python package in their Common folders. See Node writing.
https://sympathy-for-data.readthedocs.io/en/latest/src/news.html
2021-10-16T00:01:15
CC-MAIN-2021-43
1634323583087.95
[]
sympathy-for-data.readthedocs.io
Create Backup Policies Step-by-step guide on how to create a backup policy for your AWS workloads in Druva CloudRanger. This article provides a step-by-step guide on how to create a backup policy for your AWS workloads in Druva CloudRanger. You can create policies to backup your Amazon EBS, EC2, RDS, and Redshift resources. To create a backup policy: Step 1: On the top navigation bar, select Policies, and then click Create Backup Policy. Step 2: Specify the backup Schedule. - Add a Name and a brief Description of your policy. - Specify the backup Frequency. - Click Save & Continue. Step 3: Specify the backup Retention criteria. Note: Druva CloudRanger follows the Grandfather-Father-Son (GFS) retention model. For more information on retention, please see About Retention for Backup Policies. - Specify the Retention criteria. The standard retention options are pre-populated that you can modify based on your business requirements. Note: Cross-region and cross-account backups are not supported for DynamoDB and Redshift instances. - Click Save & Continue. Step 4: Specify the Resources for backup. - On Include Resources, click Add to identify resources that you wish to include in the backup. - On the Identify Resources page, specify the filter criteria to identify specific resources to include or exclude: How Include/Exclude conditions apply on Druva CloudRanger - You can create multiple include and exclude rules. - Include rules: When multiple include rules are defined, this translates to an ‘OR’ condition. In other words, resources are matched against each include rule, and do not have to meet all specified conditions concurrently. - Exclude rules: Exclude rules take precedence when the same resource is matched based on the include and exclude criteria selected. An exception to this is when an EC2 resource is selected within include and an EBS volume within exclude. In such a scenario, the EBS volume that is part of EC2 will not be excluded from the backup. - When multiple tags are defined as part of include/exclude, this translates to an ‘AND’ condition. - Similarly, on Exclude Resources, click Add to identify specific resources that you wish to exclude from the backup. - The resources identified are then displayed under Include or Exclude Resources, based on your selection criteria. - To eliminate a specific resource in the list from your backup policy, select the checkbox against that resource, and click Remove. Note: The Backup Copy Encryption is applicable only if one or more resources included in the policy is encrypted, and a backup is to be generated. If the source resource is encrypted, then an Encryption Key is applied to the backup operation. The Backup Copy Encryption options are displayed only if a cross-region or cross-account backup is to be generated for encrypted snapshots. - To backup encrypted resources, you will need to define the association of keys between the source and the target regions for that backup. To do this, select the Target Key for each target region specified. - Under Resource Backup Options, you have the option to create backups of EC2 resources as AMIs or as snapshots. In the case of AMIs, you may also select your reboot preferences. - Click Save & Continue. Step 5: Specify Additional Options for the backup. - Select the Execute VSS Consistent Scripts (Windows Only) checkbox to generate consistent snapshots for any Windows server with VSS installed. Note: If the selected Backup Policy has servers defined that do not have VSS installed, then a standard AWS EBS snapshot is generated. For more information, please see Generate VSS consistent snapshots for Windows servers. - Under Add Tags to Backups specify the tags to be applied to each backup generated by the policy. Tags act as metadata to help identify and organize your AWS resources. Based on the Key selected, you will need to specify the appropriate Value. For example: Key: Created by Policy; Value: New Key: Origin; Value: Specify Origin ID - Select the Inherit tags from Source checkbox to inherit or retrieve tags from the Origin servers and apply them to backups generated by the policy. - Click Save. Note: To manage tags on existing snapshots, please refer to AWS Management Console - Tag Editor. The backup policy is now successfully defined and is displayed on the main Backup Policies page with the State toggle set to Active. Modify backup policy state You can choose to set a backup policy to Active or Disable it. When a new Backup Policy is defined, the State toggle is set to Active by default. Note: Disabling a policy suspends all associated activities including backup retention and cleanup. To modify the state of the backup policy, click the State toggle icon against the backup policy.
https://docs.druva.com/CloudRanger/013_Manage_Backup_Policies/Create_and_Manage_Backup_Policies/13_Create_backup_policies
2021-10-16T00:04:05
CC-MAIN-2021-43
1634323583087.95
[array(['https://docs.druva.com/@api/deki/files/62024/BP3.png?revision=1', 'BP3.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/62025/BP4.png?revision=1', 'BP4.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/62027/BP5_No_S3.png?revision=1', 'BP5 No S3.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/62028/BP6.png?revision=1', 'BP6.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/56948/Encryption2.png?revision=1', 'Encryption2.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/62032/BP9_crop.png?revision=1', 'BP9 crop.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/62035/BP_State.png?revision=1', 'BP_State.png'], dtype=object) ]
docs.druva.com
Struct rusty_santa:: Group [−] [src] pub struct Group { /* fields omitted */ } A group of people that wants to draw names. Methods impl Group[src] fn new() -> Self[src] Create a new Group. fn add(&mut self, name: String)[src] Add a name to the group. fn exclude(&mut self, from: String, to: String)[src] Make sure that person A does not have to give person B a gift. fn exclude_pair(&mut self, a: String, b: String)[src] Make sure that person A and B don't have to give each other gifts. fn contains_name(&self, name: &str) -> bool[src] Return whether the specified name is alread in the group. fn assign(&self) -> Result<Vec<(String, String)>, AssignError>[src] Run the name assignment!
https://docs.rs/rusty-santa/0.1.0/x86_64-apple-darwin/rusty_santa/struct.Group.html
2021-10-15T23:46:23
CC-MAIN-2021-43
1634323583087.95
[]
docs.rs
PointCloudCreator Typical Uses - Testing a workspace - Creating point cloud placeholders How does it work? The PointCloudCreator creates a single point cloud feature, based on specified size, location, spacing, rotation, and components. Size may be defined either by an origin with X and Y Length (in ground units), or by extents (lower left and upper right coordinates). Patterns may be created for z values and/or any other component. Z Values Z values may be applied in a selection of patterns: Components Other components may also be created and patterned. Standard component names may be selected from the drop-down, and user-defined names can be entered directly. The component Data Type, Min Value, and Max Value are specified. All component values are applied in the same selected Value Pattern, shown here as an intensity component on a sloped point cloud: Examples In this example, we will create a point cloud for testing purposes, enabling us to run and test a workspace before the actual data is available. A PointCloudCreator is added to the workspace, and feature caching (A) allows us to view the point cloud as it exits the transformer. In the parameters dialog, we define the size of the point cloud by choosing Size Specification > Origin and Size. Both the X and Y Component Types are Int32 (32-bit integers), and the X Length and Y Length are set to 1000, with Average Spacing (for points) along both axes as 1. The origin is 0,0 as defined by the X and Y Lower Left Coordinates. Z values will be created in a Trigonometric pattern, and so the Z Component Type must be floating-point. Real32 is selected. The values will range between the Z Minimum and Z Maximum values of 0 and 100. One component is added, classification, and its values will range from 0 to 18 in a Checkered Pattern. These Z Values and Component Values selections will provide a range of values to test against in the workspace. A point cloud feature is output as specified, and is passed along to the next transformer and can now be used in the workspace. Viewing the feature from an oblique angle in the FME Data Inspector, note that the individual points have z and classification values interpolated between the specified minimum and maximum values. Usage Notes Choosing a Point Cloud Transformer FME has a selection of transformers for working specifically with point cloud data. For information on point cloud geometry and properties, see Point Clouds (IFMEPointCloud). Configuration Input Ports This transformer has no input ports. Output Ports Parameters The Component Values table is used to specify additional components to be added to the point cloud. One complete line is required perCreator on the FME Community. Examples may contain information licensed under the Open Government Licence – Vancouver and/or the Open Government Licence – Canada. Keywords: point "point cloud" cloud PointCloud LiDAR sonar
http://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Transformers/Transformers/pointcloudcreator.htm
2021-10-15T23:39:12
CC-MAIN-2021-43
1634323583087.95
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/PointCloudCreatorExample09.png', None], dtype=object) array(['../Resources/Images/PointCloudCreatorExample10.png', None], dtype=object) array(['../Resources/Images/PointCloudCreatorExample11.png', None], dtype=object) array(['../Resources/Images/PointCloudCreatorExample12.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.safe.com
! WWW: IRC Primary: irc.freenode.net #studioware IRC Alternative: irc.oftc.net #studioware Mailing List: Packages for 32 and 64 bit have been released for: Slackware-14.1 Slackware-14.0 Slackware-13.37
http://docs.slackware.com/studioware:start?rev=1393766932
2021-10-15T23:18:53
CC-MAIN-2021-43
1634323583087.95
[]
docs.slackware.com
. For inSync Client v5.9.9 installed on macOS Mojave: inSync requires user permission to restore the data to the following locations. Select OK to allow inSync to access these locations: Location Services Contacts Calendars Reminders Photos If inSync Client users do not take any action on the prompts, inSync will pause the ongoing backup and it will remain in that state until the user takes an action..
https://docs.druva.com/005_inSync_Client/inSync_Client_5.9.6_for_inSync_On-Premise/Backup_and_Restore/020_Restore_and_download_backup_data/020_Restore_data_by_using_inSync
2021-10-16T00:19:45
CC-MAIN-2021-43
1634323583087.95
[]
docs.druva.com
HJoin ADAFs. - class node_hjoin_adaf. HJoinADAFsList[source]¶ Horizontal join of a list of ADAFs into one ADAF. This means that all systems in each ADAF is are congregated into one ADAF with many systems. Using the option ‘Use index as prefix’ will result in columns and systems results in systems’ names getting the list index of the ADAF as a prefix to keep systems with the same names. Meta data and results will be joined horizontally with the same prefixing. Unchecking the option results in the same behaviour as Hjoin ADAFs where all but the latest instance are discarded.
https://sympathy-for-data.readthedocs.io/en/1.4/src/Library/sympathy/data/adaf/HJoinADAFsList.html
2021-10-16T00:32:04
CC-MAIN-2021-43
1634323583087.95
[]
sympathy-for-data.readthedocs.io
“Tis the season to read Trump books, fala la la lah…..” It appears this summer is the publishing season for monographs on the final year of the Trump administration. A number of important books are available, i.e., Yasmeen Abutaleb and Damian Paletta’s NIGHTMARE SCENARIO, Michael Wolff’s LANDSLIDE: THE FINAL YEAR OF THE TRUMP PRESIDENCY; Michael C. Bender’s “FRANKLY WE DID WIN THIS ELECTION,” and Carol Leonnig and Paul Rucker’s sequel to A VERY STABLE GENIUS, I ALONE CAN FIX IT: DONALD J. TRUMP’S CATASTROPHIC FINAL YEAR. All zero in on Trump’s final year in office which has turned out to be one of the most consequential years in American history since the flu pandemic of 1918. Leonnig and Rucker’s latest effort continues to take readers deep inside Trump’s chaotic and impulsive presidency and Leonnig must be recognized as a few months ago she released another important work ZERO FAIL: THE RISE AND FALL OF THE SECRET SERVICE. Leonnig and Rucker are Pulitzer Prize winning reporters for the Washington Post and have had access to numerous sources inside and outside the Trump administration. They have consulted over 140 sources that include senior most Trump administration officials, career government officials, friends, Trump himself, and outside advisors. As in her previous book, Leonnig along with Rucker have produced a well sourced, crisply written monograph that is equal to her previous efforts. What separates the first three years of the Trump administration from the last is that between 2017 and 2019 Trump faced no major crises. It was a period dominated by Trump’s bluster, self-aggrandizement, scandal, and self-preservation, characteristics that were not conducive to the events of 2020. As Trump made loyalty and personal power the number one traits that dominated his management skills his immigration policy that ripped children from their parents, his denigration of the rule of law, his threats to democracy, his support for white supremacy, his contempt for allies are all important, but he faced no economic or military crises, or a public health disaster until his last year in office. ( June 1, 2020, President Donald Trump holds a Bible as he visits outside St. John’s Church across Lafayette Park from the White House) The authors give Trump credit for Operation Warp Speed in helping to develop vaccines in record time, but his ineptitude, back biting, lack of empathy and cruelty in combating the virus through mask wearing and making vaccines a partisan political issues have resulted in the death of over 600,000 Americans and the infection of tens of millions of people with Covid-19. Apart from the pandemic, Trump’s White House oversaw the collapse of the economy, heightened racial tension after the George Floyd murder, conducted political rallies that became super spreader events, and the policy of “law and order” was being manipulated to the point that advisors had to talk him out of giving orders to shoot demonstrators. Trump’s refusal to accept the 2020 election result created further chaos as he resorted to misinformation and lies that the election was stolen from him. He created a “personality cult” that politicized any effort to combat the virus and introduced disastrous treatment options such as bleach and hydroxychloroquine. The result is that 50% of the American people are fully vaccinated and the country is now on the precipice of a fourth wave of the disease as cases are rising in areas that are under or not vaccinated. We have become a country that is split between those who are and those who are not vaccinated. As an aside Trump and his family are also fully vaccinated at the same time, he fueled the distrust in government that led to the January 6th insurrection to overturn the election of Joe Biden. Leonnig and Rucker correctly point out that the concept of the “common good” is alien to Trump as every issue; race, economics, health, immigration, foreign policy, and finally the pandemic is seen through the lens of what he believed to be in his best personal and political interests. Trump’s toolbox of bluster, bullying, and manipulation would not work during a pandemic. Muzzling experts like Dr. Anthony Fauci, picking feuds with public health officials, holding super spreader events, and refusing to be a correct role model during the pandemic helped to spread the virus further. Leonnig and Rucker provide a detailed narrative outlining the cause of the virus, the role of China, and how it spread to Europe and the United States. They relate the efforts of Dr. Robert Ray Redfield Jr., Director of the Centers for Disease Control and Prevention, and the administrator of the Agency for Toxic Substances and Disease Registry. Dr. Anthony Fauci, the Director of the U.S. National Institute of Allergy and Infectious Diseases and the chief medical advisor to the president, Matt Pottinger, former Deputy National Security Advisor of the United States, Dr. Deborah Birx, White House Coronavirus Coordinator, Stephan Hahn, head of the Food and Drug Administration, Francis Collins, head of the National Institute of Health, and others who worked to control the virus but who also were targets of President Trump as he railed against Public Health professionals. (Joint Chiefs Chairman Gen. Mark Milley) The dysfunction inside the Trump administration as it sought to deal with the pandemic are numerous and the authors present a series of them to support their points. Jared Kushner was brought in to deal with the lack of PPE and ventilators. He in turn brought in a bunch of his friends and associates, called “financial whiz kids,” who knew nothing about government procurement or how international markets functioned. One person described them as “the whiz bang crew of numb nuts” as Kushner’s “sourcing team” were in way over their heads and chaos reigned. The turf battles, messaging, fealty to Trump, and inability of advisors to get along dominate the narrative as the White House tried to speak with one voice, but the leaks proliferated. Leonnig and Rucker’s recreation of dialogue between HHS Head Alex Azar, Chief of Staff Mark Meadows, Trump advisor Michael Caputo, White House Director of Strategic Communications Alyssa Farah, and White House Spokesperson Kayleigh McEnany places the reader inside the room where the dysfunction takes place. If there is a hero in Leonnig and Rucker’s account it is Joint Chiefs of Staff Head Mark Milley. A good example of the role he played is depicted in his comments toward Stephen Miller, Trump’s sycophant and immigration guru. After listening to Miller spout his hatred, Milley told him to “shut the fuck up,” and that US troops could not be used against peaceful demonstrators. The conversations that led up to the clearing of Lafayette Square of demonstrators by Park Police, National Guard and others so Trump could have a photo op in front of St. John’s church fit Leonnig and Rucker’s ability to explain why and how things evolved and their implications. Once Defense Secretary Mark Esper spoke out against what occurred the tongue lashing he experienced by Trump is available for all to read. Milley and Esper were in a constant battle with Trump over the politicization of the military as the former president became laser focused on demonstrations in Seattle and Portland and having a large military parade on July 4th. For Milley and Esper it was all about preserving democracy and preserving the constitution. The misinformation seems to appear on every other page as the Trump administration tried to develop a narrative that the president was doing all he could to stave off the virus. At one point Kushner’s “whiz kids” and others were asked to develop new models to offset public health predictions of the number of Covid-19 cases for the future if something was not done. They were to do so by prioritizing economic recovery over public health. This goes along with Trump’s attitude that things were doing well no matter what the evidence reflected. Trump was obsessed with the number of covid-19 victims as it reflected negatively on this reelection. The stock market was another Trump obsession and when it did not cooperate he would blame others and accuse them of “killing him!” Of course, it was all the fault of the “deep state.” Once the senate voted 52-48 not to impeach him, Trump received the memo that no accountability for his actions existed, and he went on his revenge crusade getting rid of the likes of Lt. Colonel Alexander Vindman and Olivia Troye once an advisor to Vice President Pence who publicized the illegalities in dealing with Ukraine. Leonnig and Rucker do not miss a trick even including Melania Trump’s attempts to speak some sense with her husband in February 2020 to no avail. They recount advice particularly by former New Jersey governor Chris Christie among others on how Trump could win reelection but as in the case of his spouse, the former president felt he knew what was best by going after “Sleepy Joe” and discarding a series of sound suggestions. According to the authors the turning point in fighting the virus came with the arrival of Scott Atlas. For Trump, the virus stood in the way of his continuation in office and finally in July 2020 he found a public health official who would parrot his medical beliefs, Dr. Scott Atlas. Atlas was a neuroradiologist at the conservative Hoover Institution who knew nothing about contagious pathogens, but he wanted to open up the economy, refused to endorse the wearing of masks, and was an advocate that the US was approaching herd immunity. Atlas’ views were used to undercut public health officials and he went after Anthony Fauci with abandon. As Leonnig and Rucker shift to discuss the election about halfway through the book they bring the same talent and relentless reporting as Trump ratcheted up misinformation about the virus and every other issue as he tried to ignore the virus, but events would not let him, i.e.; the disastrous super spreader rally in Tulsa in June. By the end of July Trump began his campaign that would result with the accusation that the election was stolen. He suggested on July 30 that the election should be postponed, and fraud dominated mail in voting. Further on September 23, 2020, Trump stated for the first time he might not honor the results of the election if he lost. Tweets concerning the election such as “the most INACCURATE AND FRAUDULENT election in history” still reverberates today as the House began its hearing concerning the January 6th insurrection. The William Barr-Trump relationship receives a great deal of coverage apart from the Attorney-General’s role at the Justice Department. Leonnig and Rucker recount Barr’s repeated attempts to prevent the former president from self-sabotage. Barr, known more for his role in defining the Mueller Report for Trump’s benefit and interfering with the sentencing of Michael Flynn and Roger Stone, offered Trump advice about his campaigns shortcomings and what he needed to do to win reelection. Barr’s political advice was mostly ignored and the author’s recount the numerous examples of how Trump was his own worst enemy in trying to improve his position in the polls visa vie Biden, i.e., allowing Mark Meadows to cut federal funding for the CDC while Redfield was trying to get the billions needed for vaccine distribution once it was available. Rudy Giuliani emerges as a comical figure with his conspiracy theories, distorted advice to Trump over election fraud, and trying to get Trump to just declare victory despite the evidence that he had lost the election to Biden. Leonnig and Rucker’s daily account from January 2020 to January 2021 doesn’t leave out much that occurred or important conversations that took place. They culminate their story of administration incompetence and back biting with the run up to January 6th by explaining why it occurred and who in their view is to blame. The book offers a strong narrative, but is a bit light on analysis and interpretation, but if you are looking for an engrossing recapitulation of the last year you cannot go wrong consulting I ALONE CAN FIX IT.
https://docs-books.com/category/american-political-history/
2021-10-15T23:06:16
CC-MAIN-2021-43
1634323583087.95
[array(['https://s.abcnews.com/images/International/capitol-police-gty-rc-210108_1610107317802_hpEmbed_25x16_992.jpg', 'PHOTO: Trump supporters clash with police and security forces as they push barricades to storm the Capitol in Washington D.C., Jan. 6, 2021.'], dtype=object) array(['https://www.cleveland.com/resizer/3dMZgfCDRXtnOl79v0S39PmhylY=/1280x0/smart/cloudfront-us-east-1.images.arcpublishing.com/advancelocal/BFLBDRPV3NA3LI6G7ML7EDNC5Y.jpg', 'Donald Trump'], dtype=object) array(['https://api.time.com/wp-content/uploads/2020/08/military-2020-election.jpg?w=800&quality=85', 'Joint Chiefs Chairman Gen. Mark Milley speaks at a House Armed Services Committee hearing on Capitol Hill, on Feb. 26, 2020, in Washington. Joint Chiefs Chairman Gen. Mark Milley speaks at a House Armed Services Committee hearing on Capitol Hill, on Feb. 26, 2020, in Washington.'], dtype=object) array(['https://s.abcnews.com/images/Politics/house-electorial-college-protest-05-rt-jef-210106_1609969833575_hpMain_16x9_992.jpg', 'PHOTO: Supporters of President Donald Trump climb on walls at the U.S. Capitol during a protest against the certification of the 2020 U.S. presidential election results by Congress, in Washington, Jan. 6, 2021.'], dtype=object) ]
docs-books.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. The ThingTypeProperties contains information about the thing type including: a thing type description, and a list of searchable thing attribute names. Namespace: Amazon.IoT.Model Assembly: AWSSDK.IoT.dll Version: 3.x.y.z The ThingTypeProperties type exposes the following members .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/TThingTypeProperties.html
2021-10-16T00:53:07
CC-MAIN-2021-43
1634323583087.95
[]
docs.aws.amazon.com
As of version 1.9.21, WPvivid Backup Pro adds the ability to find and clean unused images in your WordPress media library. With the unused images cleaner, you can launch a scan to your media folder(uploads) to look for images that are not used anywhere on your website, then isolate the unused(or orphan) images to delete or restore them. If you are running a WordPress website handling a large number of images, you can benefit greatly from this feature. It helps free up your disk space hence optimize your website performance. Also, cleaning unused(or orphan) images before performing a backup or migration can effectively increase the process success rate. How to Clean Unused Images in WordPress Media Library? 1. Scan Your Media Folder First, install WPvivid Backup Pro on your website. On your left admin menu, click WPvivid Plugin > Image Cleaner. On the Scan Media tab that opens, click Scan button to run a scan to your media folder, which is …/wp-content/uploads. Then the plugin will start analysing the folder and find unused images in it. It offers a progress bar where you can monitor the live progress of the scan. Note: 1. Currently it only scans JPG and PNG images. 2. Please do not refresh the page while running a scan. 2. Isolate Unused Images Once the scan is done, all the unused images in your media folder will be listed. Now you have options to select specific or all images from the list and isolate them to a folder created by WPvivid Backup Pro, which is …/wp-content/WPvivid_Uploads/Isolate. You can choose to delete or restore the isolated images later. It offers a search box where you enter an image path e.g. 2021/01/wpvivid-backup-plugin-restore.png or 2021/01/ to search specific images if you have many. Choose an isolate method from the drop-down menu, click Apply to start isolating images. 3. Delete or Restore Unused Images All the images you isolated is displayed on the Isolated Media tab, from there you can choose to delete or restore specific or all images. Note: Once deleted, images will be lost permanently. The action cannot be undone, unless you have a backup in place. You can also search specific images by entering an image path in the search box if you have many. If you also want to delete corresponding image urls (that are not used anywhere on your website) from your database, before delete an image, go to WPvivid Settings > Media Cleaner Settings > Check the option ‘Delete Image URL‘ > Save Changes.
https://docs.wpvivid.com/unused-images-cleaner.html
2021-10-15T23:35:48
CC-MAIN-2021-43
1634323583087.95
[]
docs.wpvivid.com
Contact Muting in SQL queries How to write SQL queries excluding muted contacts correctly The CHT has a range of CouchDB databases for storing different types of data. By default, databases all start with the prefix “medic”. The main database, used to store all contact and report data. Data access is protected by API to provide protection on a per document basis. See Also: Database schema conventions Stores the _local/sentinel-meta-data document which stores the sequence of the last processed change in the medic db. This is used so Sentinel can resume from where it left off and process all changes in order. This database also stores metadata about the documents in the “medic” database, such as when it was received and which Sentinel transitions have executed on this doc. The UUID of the metadata doc is the same as the UUID of the “medic” doc with “-info” appended at the end. For example: { "_id": "f8cc78d0-31a7-44e8-8073-176adcc0dc7b-info", "_rev": "2-6e08756f62fa0595d87a3f50777758dc", "type": "info", "doc_id": "f8cc78d0-31a7-44e8-8073-176adcc0dc7b", "latest_replication_date": "2018-08-13T22:02:46.699Z", "initial_replication_date": "2018-08-14T10:02:13.625Z" } Used for documents which are only relevant to a single user, including: See Also: User telemetry To make it easier to perform analysis of all the docs in each user’s “medic-user-{username}-meta” database, Sentinel replicates all the “feedback” and “telemetry” docs into this single database. This is used for reporting, monitoring, and usage analytics. Replication to this database can be enabled via configuration from 3.5.0 and works without configuration from 3.10.0. This is the standard CouchDB database used to configure user authentication and authorization. Used to store documents on the client device to allow for offline-first access. Bidirectional replication is done on the “medic” and “medic-user-{username}-meta” databases. The “medic” database is only partially replicated so the user stores only a subset of the entire CouchDB database for performance and security reasons. See Also: CouchDB replication Used to store data for performant analytical queries such as impact and monitoring dashboards. The CHT uses medic-couch2pg to handle replication of docs from “medic”, “medic-sentinel”, and “medic-users-meta” databases into Postgres. See Also: Data Flows for Analytics How to write SQL queries excluding muted contacts correctly Dealing with out-of-memory errors in couch2pg Invalidating Sessions How to handle conflicts with CouchDB documents How to connect to the PostgreSQL RDBMS server from MacOS How to connect to the PostgreSQL RDBMS server from Windows Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve.
https://docs.communityhealthtoolkit.org/apps/guides/database/
2021-10-16T00:13:40
CC-MAIN-2021-43
1634323583087.95
[]
docs.communityhealthtoolkit.org
Conditional Appearance (Manage UI State) - 6 minutes to read The Conditional Appearance module allows you to configure a user interface dynamically. UI customizations are performed on the basis of predefined business rules. This topic provides an overview of the Conditional Appearance module, and describes ways to utilize its functionality in your applications. To see the module in action, refer to the FeatureCenter demo installed with XAF. This demo is located in the %PUBLIC%\Documents\DevExpress Demos 21.2\Components\eXpressApp Framework\FeatureCenter folder by default. Overview The Conditional Appearance module is represented by the platform-agnostic ConditionalAppearanceModule module project. To use the module in an XAF application, you can add it to a module or an application project. To do this, invoke the Module Designer or Application Designer, and drag the ConditionalAppearanceModule item from the Toolbox to the Modules panel. Make sure to rebuild your solution after making changes in the Designer.). Currently, the following List Editors support conditional appearance. In addition, conditional appearance can be applied to built-in Property Editors, Static Text Detail View Items (see IModelStaticText), Layout Items, Groups and Tabs (see IModelViewLayout), and Actions. The Conditional Appearance Module supports the following customizations. The following image illustrates various Conditional Appearance rules applied in List Views. The following image illustrates various Conditional Appearance rules applied in Detail Views. To effectively use this module in your applications, you only need to know how to define the conditional appearance rules that determine how a UI will be customized. A conditional appearance rule consists of the following parts. A unique rule identifier. ID - specifies the appearance rule’s identifier. The UI elements that will be affected. AppearanceItemType - specifies the target element’s type: a View Item (a property in a List View, Property Editor or Static Text), a Layout Item or an Action. TargetItems - specifies the target item identifier, or a semicolon-separated target item identifiers list. You can use the “*” symbol to target all items. When customizing an Action, specify its ActionBase.Id value. The appearance customization is specified by the following rule properties: BackColor, FontColor, FontStyle, Enabled and Visibility. The conditions under which an appearance rule is in effect are reflected by the following properties. Context - specifies Views where a rule is applied. These Views include Detail Views, List Views, specific Detail and List Views, and any View of the target business class. Criteria - specifies the criteria that must be satisfied by the target object. Method - specifies the method returning a Boolean value. When this method returns true, the rule is active; otherwise, the rule is inactive. Priority - specifies the order in which rules are applied, when several rules affect the same UI element simultaneously. Conditional appearance rules can be declared using one of the following three approaches. - In code. Refer to the Declare Conditional Appearance Rules in Code topic for details. - In the Model Editor. Refer to the Declare Conditional Appearance Rules in the Application Model topic for details. - Dynamically. Refer to the AppearanceController.CollectAppearanceRules event description for details. Deprecated Scenarios Hide or Disable UI Elements to Secure Data Appearance rules are applied at the UI level and have no effect on editors that do not support Conditional Appearance. So, it is recommended to use the Security System to secure (disable editing or hide) data on a per-user basis. However, you can use Conditional Appearance in scenarios that are not completely covered by the Security System (e.g., disable/hide editors based on data that is not yet committed). Show and Hide Actions Dynamically Do not show/hide Actions dynamically, depending on the current object state - it produces a poor user experience and may cause errors. Instead, it is recommended that you manage the Action’s enabled state, which is a common practice in business applications. For instance, imagine a situation when a user tries to click an Action that is about to be hidden. In this situation, neighboring Actions automatically change their screen position, and it is possible that the end-user might accidentally click the wrong Action. If you disable an Action instead of hide it, the screen positions of neighboring Actions will not change. Show and Hide List View Columns You may need to hide or show an entire column in a List View. This is an unnatural scenario for the Conditional Appearance module, since this module is designed to change the appearance settings of different UI elements under predefined conditions. However, you can hide or show a column if you specify a criteria that is not based on the current View’s objects. Thus, the criteria that you can specify is empty, similar to “1=1” or a Function Criteria Operator, neither of which require information on the current object. To hide or show a column, the Index property of the corresponding Views | List View | Columns | Column node is set to “-1” in the Application Model. This is performed once the List Editor’s control has been created, so if a column is hidden because the global criteria you are using in a rule returns true, the column will not be shown after this criteria returns false. In this instance, you should declare two appearance rules - one for hiding a column and the other for showing a column. The column’s visibility state will be refreshed only after the List Editor is recreated. Note that you can specify a static method (see AppearanceAttribute.Method) that will always return true if it is difficult to write a criteria in the declaration of a rule (see IAppearanceRuleProperties.Criteria). Note In ASP.NET Web Forms applications, hiding and showing List View columns may not function properly in certain scenarios. If the current implementation of the Conditional Appearance module does not meet your needs, or you have any issues, contact our support team and we will suggest an alternative implementation for your particular scenario. Manage Appearance of a Nested ListView Actions and Columns Based on the Master Object Conditional Appearance rules work in this scenario only if there is at least one record selected in a nested ListView. In other cases, hide Actions and columns using a custom View Controller, as described in the following topics:
https://docs.devexpress.com/eXpressAppFramework/113286/conditional-appearance?v=21.2
2021-10-15T22:47:56
CC-MAIN-2021-43
1634323583087.95
[array(['/eXpressAppFramework/images/conditionalappearance_listview116984.png?v=21.2', 'ConditionalAppearance_ListView'], dtype=object) array(['/eXpressAppFramework/images/conditionalappearance_detailview116985.png?v=21.2', 'ConditionalAppearance_DetailView'], dtype=object) ]
docs.devexpress.com
This topic describes the settings available for SOAtest Test Configurations. Sections include: How to Configure and Share Test Configurations The general procedures related to configuring and sharing Test Configurations are standardized across the Parasoft Test family, and are discussed in Configuring Test Configurations and Rules for Policies. Scope Tab Settings: Defining What Code is Tested For source code static analysis only. During a test, SOAtest will perform the specified action(s) on all code in the selected resource that satisfies the scope criteria for the selected Test Configuration. By default, SOAtest operates on all files in the selected project or asset. However, you can use the Scope tab to configure restrictions such as: - Test only files or lines added or modified after a given date. - Test only files or lines added or modified on the local machine. - Test only files modified by a specific user. - Test only files that match certain filter criteria Note that some file filters and line filters are only applicable if you are working with projects that are under supported source control systems. The Scope tab has the following settings: - File Filters: Restricts SOAtest from testing files that do not meet the specified timestamp and/or author criteria. - Time options: Restricts SOAtest from testing files that do not meet the specified timestamp criteria. Available time options include: - No time filters: Does not filter out any files based on their last modification date. - Test only files added or modified since the cutoff date: Filters out files that were not added or modified since the cutoff date. - Test only files added or modified in the last n days: Filters out files that were not added or modified in the specified time period. - Test only files modified between working and ____: Filters out files that were not modified between the working developer branch (in the workspace) and the specified branch (or the default integration stream detected, if that option is enabled). The stream name can be any stream from the parent’s hierarchy of developer working streams. The default integration stream is the parent stream of the developer working stream. For example, if you have a stream hierarchy of [Main] --- [Integration] --- [Developer], then Integrationis the default integration stream for the Developerstream. This is currently supported for SVN, AccuRev, and Clear Case. - Test only files added or modified locally: Filters out files that were not added or modified on the local machine. This feature only applies to files that are under supported source control systems. - Author options: Restricts SOAtest from testing files that do not meet the specified author criteria. Available author filter options include: - No author filters: Does not filter out any files based on their author. - Test only files authored by preferred user: Filters out any files that were not authored by the specified user (i.e., filters out any files that were authored by another user). - Path options: Configures SOAtest to filter in or out files that match the specified filter criteria. Use the "accept" filter to specify the types of files you want to include. Use the "reject" filter to specify the types of files you want to exclude. For code review test configurations, these filters will be pre-populated with common filtering options. Filter Tips and Examples Tips - Perl-style expressions can be used. - The following wildcards are supported: - * matches 0 or more characters except '/'. - ? matches any single character except '/'. - ** matches 0 or more characters, including '/'. This allows you to include path elements. - The following sample elements are added by default to the Code Review configuration: - **/bin/**/*.properties is added to the sample list of rejected wildcards. - (.*?/(bin|obj)(/x86|/x64){0,1}/(Debug|Release)/.*?\\.(dll|exe|pdb))$ is added to the sample list of rejected regexps. Regular expressions can be used to identify specific differences. For instance, if you want to flag only source code changes that add, remove, or modify TODO tags, you would use the Differences regular expression .*TODO.* Examples A basic file mask might be: - *.java, *.xml, *.properties - *.c, *.cpp, *.h, *.cc, *.hpp, makefile, .project, .classpath - *.c, *.cpp, *.h, *.cc, *.hpp, makefile, *.sln, *.prj, *.res - *.cs, *.vb, *.sln, *.prj, *.resx To include every file whose path has a folder named "bank" or "customer", use: - **/bank/**, **/customer/** To include every file whose path has a folder with a name that either starts with "bank", includes "customer", or ends with "invoice", use: - **/bank*/**, **/*customer*/**, **/*invoice/** To include every .java file that 1) has name that starts with "Test", and 2) is located in a folder named "security" (which is within the src/test directory of any project), use: **/src/test/**/security/Test*.java To include every .cs file that 1) is in the ATM solution, 2) is in the ATMLib project, 3) is within the CompanyTests subfolder, 4) has a name that starts with "Test", use: ATM/ATMLib/CompanyTests/**/Test*.cs - Line filters: Restricts the lines of code that SOAtest operates on. The file filter is applied first, so code that reaches the line filter must have already passed the file filter. Available line filter options include: - Time options: Restricts SOAtest from testing lines of code that do not meet the specified timestamp criteria. Available time options include: - No time filters: Does not filter out any lines of code based on their last modification date. - Test only lines added or modified since the cutoff date: Filters out lines of code that were not added or modified since the cutoff date. This feature only applies to files that are under supported source control systems. - Test only lines added or modified in the last n days: Filters out lines of code that were not added or modified in the specified time period. - Test only lines added or modified locally: Filters out lines of code that were not added or modified on the local machine. This feature only applies to files that are under supported source control systems. - Author options: Restricts SOAtest from testing lines of code that do not meet the specified author criteria. Available author filter options include: - No author filters: Does not filter out any lines of code based on their author. - Test only files authored by users: Filters out any files that were not authored by the specified users. For example, you can use this to focus on files that you—or a selected group of teammates—worked on. To specify multiple users, use a comma-separated list (for example: matt, tom, joe). Configuring Scope and Authorship Code authorship information and last modified date is determined in the manner set in the Scope and Authorship preferences page; for details about available settings, see Configuring Task Assignment and Code Authorship Settings. Static Tab Settings: Defining How Static Analysis is Performed During a test, SOAtest will perform static analysis based on the parameters defined in the Test Configuration used for that test. The Static tab has the following settings: - Enable Static Analysis: Determines whether SOAtest performs static analysis, which involves checking whether the selected resources follow the rules that are enabled for this Test Configuration. - Limit maximum number of tasks reported per rule to: Determines whether SOAtest SOAtest applies the specified suppressions. If suppressions are not applied, SOAtest will report all violations found. - Rules tree: Determines which rules are checked during static analysis. Use the rules tree and related controls to indicate which rules and rule categories you want checked during static analysis. - To view a description of a rule, right-click the node that represents that rule, then choose View Rule Documentation from the shortcut menu. - To view a description of a rule category, right-click the node that represents that rule category, then choose View Category Documentation from the shortcut menu. To enable or disable all rules in a specific rule category or certain types of rules within a specific rule category, right-click the category node, then choose Enable Rules>[desired option] or Disable Rules> [desired option]. - To search for a rule, click the Find button, then use that dialog to search for the rule. - SOAtest, choose Help> Help Contents, then open the SOAtestableDocs button. Execution Tab Settings: Defining How Tests are Executed During a test, SOAtest will execute test cases based on the parameters defined in the selected Test Configuration’s Execution tab. For any level of execution, the top-level Enable Test Execution option must be enabled. Functional tab The Execution> Functional tab has the following settings: - Execute functional tests: Determines whether any functional tests are run. - Enable event logging: Determines whether SOAtest logs the data needed to provide a detailed chronologist sequence of all the events that occurred between the start and end of the test (for instance, all requests sent, responses received, data source rows used, wait times, navigation tasks, and so on). See Exploring Test Event Details for details. - Execute in load test mode: Determines whether SOAtest executes available tests in load testing mode and alerts you to any outstanding issues that might impact your load testing—for example, incorrectly configured HTTP requests. See Validating Tests for details. - Auto-configure tests in preparation for load testing: Determines whether SOAtest configures browser-based web scenariosto run in a browser-less load test environment. See Configuring Tests for details. - Execute only opened Test Suite (.tst) Files (always false in command-line mode): Determines whether SOAtest executes Test Suites that are not currently active (i.e., tests that you are not currently working on). - Report traffic for all tests: Determines whether reports contain a "Test Traffic [All Tests]" section, which contains traffic for every test execution—whether or not it was successful. If this is enabled, you can also configure the traffic limit: the amount of traffic that will be stored during a test execution session (not per test). The default is 500 KB. - Launch an application: Allows you to configure a SOAtest test configuration to run an Eclipse launch configuration at the beginning of the execution of the test configuration. For example, assume you want to run a test scenario against a local copy of an application that you start and run within Eclipse. If you want to start your application and run your tests in a single step, you can create a Test Configuration to launch this application as well as execute tests. - Override default environment during test execution: Configures SOAtest to always use the specified environment for tests run with this Test Configuration—regardless of what environment is active in the Test Case Explorer. For example, assume you have the following environments: This is how you set the Test Configuration to always use the "staging server" environment: - Use playback engine: Allows you to override a test’s playback engine settings at the time of test execution. By default, Test Configurations are set to play web scenarios using the playback engine specified at the test suite level. This allows you to use a single Test Configuration to execute a mixture of tests configured for Selenium and tests configured for the legacy engine. If you select a specific driver here, it will be used regardless of what engine is configured at the test scenario level. See Using Selenium WebDriver for Legacy Browser Recordings and Using the Legacy Native Driver Instead of Selenium for details. - Use browser: Allows you to override a test’s browser playback settings at the time of test execution. See Configuring Browser Playback Options for details. - Apply static analysis to: If a Test Configuration runs both static analysis and test execution (e.g., for performing static analysis on a web scenario), this setting determines whether static analysis is performed on the HTTP responses, or the browser contents. - HTTP Responses refers to the individual HTTP messages that the browser made in order to construct its data model—the content returned by the server as is (before any browser processing). - Browser-Constructed HTML refers to the real-time data model that the browser constructed from all of the HTML, JS, CSS, and other files it loaded. Security tab The Execution> Security tab allows you to configure penetration testing, which is described in Penetration Testing. Runtime Error Detection tab The Execution> Runtime Error Detection tab allows you to configure runtime error detection, which is described in Performing Runtime Error Detection. Change Impact tab The Execution> Change Impact tab contains an option (Perform change impact analysis) that controls whether the current Test Configuration performs change impact analysis during test execution. See Updating Messages with Change Advisor for details. API Coverage tab The Execution> API Coverage tab contains options for calculating API Coverage during test execution. See API Coverage for details. Application Coverage tab The Execution> Application Coverage tab contains options for collecting application coverage data, which provides visibility into the level of code coverage achieved by your SOAtest tests. See Application Coverage for details. Common Tab Settings: Defining Common Options that Affect Multiple Analysis Types The Test Configuration’s Common tab controls test settings for actions that affect multiple analysis types. The Common tab has the following settings: - Override Session Tag: Assigns the specified session tag to results from test runs performed using the current Test Configuration. This overrides the session tag specified in Preferences> Parasoft> Reports. This value is used for uploading summary results to Team Server. The tag is an identifier of the module checked during the analysis process. Reports for different modules should be marked with different tags. The same variables that are valid for Parasoft Test Preferences options can be used here. - Before Testing> Refresh projects: Determines whether projects are refreshed before they are tested. When a project is refreshed, SOAtest checks whether external tools have changed the project in the local file system, and then applies any detected changes. Note that when you test from the command line, projects are always refreshed before testing. - Before Testing> Update projects from source control: Determines whether projects are updated from source control (if you using a supported source control system) before they are tested. - Build: Determines if and when whether projects are built before they are tested. Note that this settings applies to GUI tests, not command-line tests. Available options include: - Full (rebuild all files): Specifies that all project files should always be rebuilt. - Incremental (build files changed since last build): Specifies that only the project files that have changed since the previous build should be rebuilt. - Stop testing on build errors: Specifies that testing should stop when build errors are reported. - After Testing> Commit added/modified files to source control if no tasks were reported: Allows you to combine your testing and your source control checkins into a single step. For example, you would enable this if you want to run static analysis on files, then have SOAtest automatically check in the modified files if no static analysis tasks are reported. In the context of functional testing, it tells SOAtest that if you run modified tests—and they pass—it should check the modified tests into source control. Code Review Tab Settings: Defining Code Review Options This tab contains settings for automating preparation, notification, and tracking the peer review process, which can be used to evaluate critical SDLC artifacts (source files, tests, etc.) in the context of the organization’s defined quality policies. For details on configuring Code Review (including details on Code Review tab options), see Code Review. Goals Tab Settings: Defining Error Reporting and Resolution Targets The team manager can specify a reporting limit (such as "Do not report more than 25 static analysis tasks per developer per day") and/or a quality goal (such as "All static analysis violations should be fixed in 2 months"). SOAtest will then use the specified criteria to select a subset of testing tasks for each developer to perform each day. These goals are specified in the Goals tab. Progress towards these goals can then be monitored in reports. Alternatively, you can set global team goals—goals that may span across multiple Test Configurations and even across Parasoft Test products—as described in Configuring Task Goals. This requires Team Server and a Parasoft Test Automation edition license. If goals are set globally, the Goals tab in the Test Configuration panel will be disabled. The Goals tab has the following settings: Static tab - Perform all tasks: Specifies that you want SOAtest to report all static analysis tasks it recommends, and the team should perform all static analysis tasks immediately. - Don’t perform tasks: Specifies that you want SOAtest to report all static analysis tasks it recommends, but the team is not required to perform all static analysis tasks immediately. This is useful, for instance, if you want to see all recommended static analysis tasks, but you want the team to focus on fixing test failures before addressing static analysis violations. - No more than n tasks per developer by date: Specifies that you want each developer to have only n static analysis tasks by the specified date. - Max tasks to recommend: Limits the number of static analysis. Execution tab - Perform all tasks: Specifies that you want SOAtest to report all functional testing tasks, and the team should perform the specified tasks immediately. - Don’t perform tasks: Specifies that you want SOAtest to report all functional testing tasks, but the team is not required to perform the specified tasks immediately. This is useful, for instance, if you want to see a list of all necessary functional testing tasks, but you want the team to focus on fixing static analysis tasks before addressing functional test failures. - No more than n tasks per developer by date: Specifies that you want each developer to have only n functional testing tasks by the specified date. - Max tasks to recommend: Limits the number of functional testing.
https://docs.parasoft.com/pages/viewpage.action?pageId=51913199
2021-10-16T00:06:06
CC-MAIN-2021-43
1634323583087.95
[]
docs.parasoft.com
(Queen Wilhelmina of Holland broadcasting over the BBC from London to her country during WWII) England has had a long and tortured history as she related to the European continent – always asking the question: should we become involved or not? We can see it after World War II and the developing Common Market, and of course with the recent Brexit vote. The dark days during the spring of 1940 when the Nazis rolled over France and the Low countries presented the problem anew, but this time after sitting back in the late 1930s allowing Hitler carte blanche it decided to support a “community of nations” as London was made available as a sanctuary for governments overrun by the Nazis. London would become the home for the exiled governments of Poland, Norway, France, Belgium, Holland, and Czechoslovakia. These governments would band together with England to defeat Nazism and lay the basis for European cooperation after the war. One of Olson’s major themes rests with the exile communities. She affirms without the exiles work as pilots, mathematicians, intelligence operators, scientists, physicists, and soldiers who knows how the war might have turned out. Today, with the European Union under attack on the continent by certain right wing parties it is useful to explore Lynne Olson’s latest work dealing with World War II entitled, LAST HOPE ISLAND: BRITAIN, OCCUPIED EUROPE AND THE BROTHERHOOD THAT HELPED TURN THE TIDE OF WAR. (Charles de Gaulle, leader of Free French forces during WWII) Olson covers a great deal of material in her book, much is new, but some of it has appeared in past books. For example, the chapter dealing with the Battle of Britain and the London Blitz has a similar narrative that appears in A QUESTION OF HONOR: THE KOSCIUSZKO SQUADRON: THE FORGOTTEN HEROES OF WORLD WAR II as she writes about Squadron 303 made up of Polish airmen who accomplished remarkable things at a time of England’s greatest need. Other examples can be found in TROUBLESOME YOUNG MEN: THE REBELS WHO BROUGHT CHURCHILL TO POWER AND HELPED SAVE ENGLAND and CITIZENS OF LONDON: THE AMERICANS WHO STOOD WITH BRITAIN IN ITS DARKEST, FINEST HOUR. The integration of past research enhances her current effort particularly when she writes about the early part of the war. To her credit she has an amazing knowledge of the leading secondary works and historians dealing with her topic which just enhances the narrative. Olson employs a wonderful wit as part of her approach to writing. For example she quotes the novelist and former MI6 member, John le Carre as he noted how devoted MI6 had been to “the conspiracies of self-protection, of using the skirts of official secrecy in order to protect incompetence, of gross class privilege, of amazing credulity,” then remarks that “the years immediately preceding the war MI6, as it happened, had a considerable amount of incompetence to protect.” (British Prime Minister Winston Churchill) The author breaks the narrative into two separate parts. The first being the prewar period through the end of 1941 as the Germans rolled through France and the Low countries and we find a number of governments in exile stationed in London. In that section of the book Olson successfully narrates the relationship of these governments in exile first with the Chamberlain government, then that of Churchill. She explores the important personalities that include King Haakon of Norway, Queen Wilhelmina of Holland, Charles de Gaulle of the Free French, and Edvard Benes of Czechoslovakia. The problems of each are explained as well as how the British responded to their needs. Olson accurately points out the humiliation and frustration experienced by Benes who was forced not to fight during the Munich conference, then was pilloried for not fighting when Hitler seized Czechoslovakia in March, 1939. Further she explores the difficult relationship between the British and the French particularly during the evacuation from Dunkirk, as well as with de Gaulle once France fell. For the British de Gaulle could be described as the self-appointed French leader who exhibited “extreme weakness that required extreme intransigence.” King Haakon and Queen Wilhelmina got along much better with the British as each had merchant marine fleets that English needed, as well as natural resources. Olson points out the complexity of the relationship with the Polish government in exile. Of all these governments it was the Poles who fought, wanted to continue to fight, and developed the Home Army to do so. They made tremendous contributions as pilots, intelligence sources, and creating a resistance against Nazi Germany. (Exiled Polish pilots from Squadron 303 who assisted England during the Battle of Britain) Olson does a commendable job explaining the incompetence of the British and French military leadership who instead of accepting responsibility for events that led to Dunkirk used Belgium as their scapegoat for their own failures and defeat. Showering King Leopold as a “Quisling” was blasphemy for the king whose army fought as well as possible based on the resources at his command, and further, refused to surrender to the Germans. Olson also argues that the myth that the French just gave up was unfair based on the lack of support the British provided as the Germans goose-stepped into Paris. The importance of the BBC is given its own chapter which is important because the radio broadcasts had an important role to play. First, it allowed exiled leaders the opportunity to broadcast their own message to their people. Second, it provided the various resistance movements accurate information as to the course of the war. Third, they broadcasted in over forty languages. Lastly, it gave hope to demoralized population, particularly in France as they told the truth. (King Haakon VII of Norway) By December 1941 the governments in exile came to the realization that with the entrance of the United States and the Soviet Union the entire diplomatic formula was dramatically altered. With the Americans and Russians now in the war, their early closeness with Great Britain was about to give way to power politics, and perhaps a European Union might be in the offering. From this point on Olson’s focus begins to change. Olson spends a great deal of time taking apart the reputations of British MI6 and their Special Operations Executive. She delves into the lack of competence exhibited by MI6 head Stewart Menzies and his battle with SOE leadership whose task was to foment sabotage, subversion and resistance in Europe. In chapters dealing with Holland and France, Olson points out the errors that SOE leaders engaged in including a lack of security and simplistic coding, and foolish field decisions involving their agents. London’s poor decision making would prove disastrous for Dutch agents who were easily rounded up by the Germans as they parachuted into Holland. Olson is meticulous as she undermines the myth of the excellence of British secret services and the negative impact on events in Holland and France. Two men stand out in her narrative, Leo Marks and Frances Cammaerts who were “passionate, skeptical, and [possessed] fiercely independent traits unappreciated by the SOE brass.” The problem was this weak intelligence infrastructure created issues for the French resistance that was to play a major role in D Day planning and the early stages of the invasion as many suffered horrendous death at the hands of the SS. Further complicating things was the split between the French resistance and de Gaulle, and the British and de Gaulle. In both cases endangering the overall invasion. (Czechoslovakia’s leader Edvard Benes) Olson is at her best when she integrates stories about certain figures who seem to be on the periphery of the main narrative, but are involved in important actions. For example Andree de Vongh, an independent woman who decided to ignore SOE objections and developed the “Comet Line” an escape route for British airmen and paratroopers that began in Brussels, snaked its way through France, and crossed the Pyrenees into Spain. She organized safe houses along the route and when MI9 refused to give her funds she raised them on her own. She personally escorted 118 servicemen to freedom out of 7000 total for all networks during the war. If reading about de Vongh is not interesting enough, Audrey Kathleen Ruston, a thirteen year old aspiring dancer and Dutch resistance member emerges, a.k.a Audrey Hepburn. One of the major debates that historians seem to engage in is how valuable were resistance movements in winning the war. Though some argue not as much as one might think, Olson makes the case throughout that they were very consequential. The Poles in particular who contributed to breaking the Enigma code and intelligence collected by their spies throughout Europe were of great importance to the Allied victory. The Poles who seemed to have given so much received very little as the war wound to a close, and in the postwar world. It was unfortunate that they became pawns between Stalin’s strategic view of Soviet national security in Eastern Europe, and Roosevelt’s desire not to upset the Russian dictator whose army suffered an inordinate number of casualties compared to England and the United States. When Polish exile leaders appealed to Churchill, no matter what the English Prime Minister believed, he could do little to convince his allies to assist the Poles as the Nazis were about to destroy what remained of Warsaw in May, 1944. As far as the French are concerned General Eisenhower argues that the resistance was “of inestimable value…without their great assistance, the liberation of France would have consumed a much longer time and meant greater losses to ourselves.” Olson summarizes her view nicely as she quotes historian Julian Jackson, “there was indeed a Resistance myth which needed to be punctured, but that does not mean that the Resistance was a myth.” (British General Bernard Montgomery, 1943) When evaluating the Dutch contribution Olson correctly takes General Bernard Montgomery to task. Montgomery had a large sense of self, arrogant and stubborn as he refused to take into account Dutch intelligence concerning the retaking of the port of Antwerp. Rather than securing the Scheldt River estuary before moving on to Operation Market Basket, Montgomery had his eye on racing to Berlin before the Americans or Russians arrived. As a result the Germans lay in wait, and Arnhem would become a trap leading to a fiasco which Montgomery’s over-sized ego caused.. “As a result, many more people would die, soldiers, and civilians alike. For the Netherlands, the consequences would be dire” as the Allies controlled southern Holland, but the Nazis the northern cities and they took out their retribution on the populations of Rotterdam, Amsterdam, the Hague, Utrecht, and others. The latter part of the book evolves into a narrative of the last year of the war. Olson covers the salient facts and personalities as she tries to maintain to her “exile” theme. If one were to pick which character she was most impressed with it would be Queen Wilhelmina and the Dutch people. Olson points out the errors that politicians made and how their decisions impacted the post war world particularly Czechoslovakia as Patton’s Third Army stood outside Prague and waited to allow the Soviet army march in. This along with Poland plight reflects Roosevelt, Truman, and Eisenhower’s desire not to allow political implications affect how they decided to deploy American soldiers. Olson’s new book is an excellent read, a combination of straight narrative, interpretive, and empathetic history that all can enjoy. (Holland’s Queen Wilhelmina returning to her country after WWII)
https://docs-books.com/2017/09/13/last-hope-island-britain-occupied-europe-and-the-brotherhood-that-helped-turn-the-tide-of-war-by-lynne-olson/
2021-10-15T22:34:45
CC-MAIN-2021-43
1634323583087.95
[]
docs-books.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Metadata attributes of the time series that are written in each measure record. Namespace: Amazon.IoT.Model Assembly: AWSSDK.IoT.dll Version: 3.x.y.z The TimestreamDimension type exposes the following members .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/TTimestreamDimension.html
2021-10-16T01:24:00
CC-MAIN-2021-43
1634323583087.95
[]
docs.aws.amazon.com
CHT Applications > Reference > forms/ > app : Formsappform Namepropertiesjson App Forms: Used to complete reports, tasks, and actions in the app This tutorial will take you through how to write the <form_name>.properties.json file. The <form_name>.properties.json file allows you to add logic that ensures that the right action appears for the right contacts (people and places). For instance, an assessment form for children under-5 will only appear for person contacts on the CHT whose age is less than 5. You will be adding meta-data and context to an assessment workflow that allows Community Health Workers to conduct a health assessment for children under the age of 5. Form context defines when and where the form should be available in the app. You should have a functioning CHT instance with cht-conf installed locally, completed a project folder setup, and an assessment form. Create a new file in the same folder as your assessment.xlsx file and name it assessment.properties.json. Edit the assessment.properties.json file and add a title key with the value corresponding to the desired file title. { "title": "Assessment" } Add a resources folder in your project folder and put your preferred icon for assessment in it. Name the icon file icon-healthcare-assessment.png if it is a png file or icon-healthcare-assessment.svg if it is an svg file. Create a resources.json file in your project folder and add key/value pairs for your icon resources. { "icon-healthcare-assessment": "icon-healthcare-assessment.png" } See Also: Icon Library Add an icon key in the assessment.properties.json file. Pick the key of the icon you require from the resources.json file and add it as the icon value. { "title": "Assessment", "icon": "icon-healthcare-assessment" } First, add a context key in the assessment.properties.json file. Next, add an object with person, place and expression keys. Then, add the boolean value true for the person key, the boolean value false for the place key and the expression ageInYears(contact) < 5 for the expression key. { "title": "Assessment", "icon": "icon-healthcare-assessment", "context": { "person": true, "place": false, "expression": "ageInYears(contact) < 5" } } <form_name>.properties.jsonFile Run the following command from the root folder to upload the resources folder and resources.json file: cht --url=https://<username>:<password>@localhost --accept-self-signed-certs upload-resources Run the following command from the root folder to upload the assessment.properties.json file: cht --url=https://<username>:<password>@localhost --accept-self-signed-certs upload-app-forms -- assessment <username>and <password>with the actual username and password of your test instance. Once you successfully upload the assessment.properties.json file, ‘Assessment’ will appear as an action only for person contacts who are less that 5 years old. Additionally, the icon-healthcare-assessment icon will now show alongside the action name. App Forms: Used to complete reports, tasks, and actions in the app This document covers the configuration best practices of forms, tasks, targets, and contact profiles when building your own community health app. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve.
https://docs.communityhealthtoolkit.org/apps/tutorials/form-properties/
2021-10-15T23:06:33
CC-MAIN-2021-43
1634323583087.95
[]
docs.communityhealthtoolkit.org
Getting Started Composable code promotion is run through a C# console application that manages resource syncing between instances of Composable. The main requirement is a machine with Composable and Visual Studio installed which can access both the source and target instances of Composable but which is not itself hosting either of those instances. Composable credentials for each instance are also required. Create a Console App To begin setting up code promotion for a Composable project, open Visual Studio and create a new project, choosing "Console App (.NET Framework)" as the project template. This should give you a simple console app which will become the base for code promotion. Reference the Composable API Next add a reference to CompAnaltyics.IServices.dll. To add a reference in Visual Studio, right click on the References header in the Solution Explorer and select "Add Reference...". Then click the "Browse..." and navigate to the desired dll. CompAnalytics.IServices.dll can usually be found in C:\Program Files\CompAnalytics\WebApp\bin. If Composable is installed in another location look for \WebApp\bin underneath the top level Composable directory. Check App.config If your App.config file doesn't have a <webHttpBinding> section already, add one like this (inside <configuration> but outside any other sections): <system.serviceModel> <bindings> <webHttpBinding> <binding maxReceivedMessageSize="2147483647" receiveTimeout="00:30:00" sendTimeout="00:30:00"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647"/> </binding> </webHttpBinding> </bindings> </system.serviceModel> Once these have been added, add the lines using System.Net; using CompAnalytics.IServices.Deploy; to the top of Program.cs. Now you're ready to begin writing the code that specifies what resources to deploy and how. Folder Syncer Here's a simplified version of the code you need to sync resources, which you would place in Program.Main: var syncSettings = new SyncSettings(); var syncer = new FolderSyncer(syncSettings); syncer.Sync(); The method FolderSyncer.Sync will perform all of the syncing operations, retrieving resources from the source instance, applying appropriate transformations, and saving them to the target instance. Progress messages will print in the console while this runs. In this oversimplified form the deployment will fail, however, because some settings in SyncSettings are required. Required Settings A working settings object will look something like this, where some settings reference variables defined earlier in the code and discussed in more detail below. var syncSettings = new SyncSettings { SourceConnectionSettings = devConnectionSettings, TargetConnectionSettings = prodConnectionSettings, Folders = new List<FolderMapping> { new FolderMapping("/SourceFolder", "/TargetFolder") }, ExtensionAssemblies = new List<string> { "CompAnalytics.Execution.Modules" } } Other options also exist that can be configured via SyncSettings but without these 4 properly configured the console app will fail. Connection Settings The first required settings are for connecting to the source and destination Composable instances. These connections will look something like this: var devConnectionSettings = new ConnectionSettings { Uri = new Uri("https://<composable-instance-url>/CompAnalytics/"), AuthMode = AuthMode.Form, FormCredential = new NetworkCredential("<user-name>", "<password>") }; The Uri should be the root of the Composable instance. Create a ConnectionSettings instance for each Composable instance you need to connect to. If the code promotion is all within one server and just separated by folders then only one ConnectionSettings object is required. ConnectionSettings offers three authentication modes: AuthMode.Form: Forms authentication, with a username and password specific to Composable. FormCredentialis required. AuthMode.Windows: Windows authentication, if you log in to Composable with your Windows account. WindowsCredentialis optional; if it is not included the Windows account running the console app will be used (via CredentialCache.DefaultNetworkCredentials). AuthMode.Hybrid: Windows and forms authentication, in which a windows account is used to access the server but actions in Composable use a separate forms account. FormCredentialand WindowsCredentialare required. Note that when passing a WindowsCredential it may be necessary to also specify the domain in the NetworkCredential constructor as new NetworkCredential("<user-name>", "<password>", "<domain>"). Folders These define what resources will be synced and to where. SyncSettings takes a list of FolderMapping objects, which each defines a folder to include in the sync. In this simple example: Folders = new List<FolderMapping> { new FolderMapping("/SourceFolder", "/TargetFolder") }, all the QueryViews and DataFlows in the folder /SourceFolder on the source instance will be saved to the folder /TargetFolder on the destination instance. If you want the folder structure to be the same on the target as it is on the source, use the shortcut method SyncSettings.Create to generate the list by passing all the folder paths: Folders = SyncSettings.Create( "/Folder1", "/Nested/Folder2" ) Extension Assemblies SyncSettings.ExtensionAssemblies is a list of namespaces containing the modules used in any DataFlows that will be synced. For example, if you use a SQL Query module, you would add "CompAnalytics.Execution.Modules" to your ExtensionAssemblies list. All of these assemblies will be loaded during the sync to facilitate the deserialization of DataFlow contracts. Thus you must also add a reference in your project to every dll included in ExtensionAssemblies. These dlls can usually be found alongside CompAnalytics.IServices.dll in \WebApp\bin or in \WebApp\bin\plugins, but if the Composable instances you're syncing have custom extensions that introduce new modules you may need to acquire those dlls separately to reference them. Run the Console App Once you have all these pieces together, your Program.cs file should look something like this: using System.Net; using CompAnalytics.IServices.Deploy; namespace DemoDeployment { class Program { static void Main(string[] args) { var devConnectionSettings = new ConnectionSettings { Uri = new Uri("https://<composable-dev-instance-url>/CompAnalytics/"), AuthMode = AuthMode.Form, FormCredential = new NetworkCredential("<user-name>", "<password>") }; var prodConnectionSettings = new ConnectionSettings { Uri = new Uri("https://<composable-prod-instance-url>/CompAnalytics/"), AuthMode = AuthMode.Form, FormCredential = new NetworkCredential("<user-name>", "<password>") }; var syncSettings = new SyncSettings { SourceConnectionSettings = devConnectionSettings, TargetConnectionSettings = prodConnectionSettings, Folders = new List<FolderMapping> { new FolderMapping("/SourceFolder", "/TargetFolder") }, ExtensionAssemblies = new List<string> { "CompAnalytics.Execution.Modules" } }; var syncer = new FolderSyncer(syncSettings); syncer.Sync(); Console.WriteLine("Deployment complete"); Console.ReadLine(); } } } The final two lines are just to provide confirmation that everything has finished and provide an opportunity to review all the progress messages that printed before closing the window (if run in a context where the window persists only with the application, as is the case when running in Visual Studio). Additionally you should have references in your project to CompAnalytics.IServices and CompAnalytics.Execution.Modules. Once those conditions are met, you can build and run the app and it should sync all QueryViews and DataFlows from /SourceFolder on the dev instance to /TargetFolder on the prod instance.
https://docs.composable.ai/en/latest/Code-Promotion/02.Setup/
2021-10-15T22:43:24
CC-MAIN-2021-43
1634323583087.95
[]
docs.composable.ai
Communication¶ Communication in POP is abstract, multiple communication protocols can be used at the same time. Those protocols are abstracted by the Combox object and its companion, a 3 generic and 3 helper classes for a total of 6. POP-Java plugin explains how to create a new user generated Combox while in this chapter we want to explain what’s behind a standard Combox and what happen between a Client’s Interface and the POP Object’s Broker.
https://pop-java.readthedocs.io/en/v2.0.0/dev/communication/
2021-10-15T22:53:20
CC-MAIN-2021-43
1634323583087.95
[]
pop-java.readthedocs.io
Implementing a game rule system¶ The simplest way to create an online roleplaying game (at least from a code perspective) is to simply grab a paperback RPG rule book, get a staff of game masters together and start to run scenes with whomever logs in. Game masters can roll their dice in front of their computers and tell the players the results. This is only one step away from a traditional tabletop game and puts heavy demands on the staff - it is unlikely staff will be able to keep up around the clock even if they are very dedicated. Many games, even the most roleplay-dedicated, thus tend to allow for players to mediate themselves to some extent. A common way to do this is to introduce coded systems - that is, to let the computer do some of the heavy lifting. A basic thing is to add an online dice-roller so everyone can make rolls and make sure noone is cheating. Somewhere at this level you find the most bare-bones roleplaying MUSHes. The advantage of a coded system is that as long as the rules are fair the computer is too - it makes no judgement calls and holds no personal grudges (and cannot be accused of holding any). Also, the computer doesn’t need to sleep and can always be online regardless of when a player logs on. The drawback is that a coded system is not flexible and won’t adapt to the unprogrammed actions human players may come up with in role play. For this reason many roleplay-heavy MUDs do a hybrid variation - they use coded systems for things like combat and skill progression but leave role play to be mostly freeform, overseen by staff game masters. Finally, on the other end of the scale are less- or no-roleplay games, where game mechanics (and thus player fairness) is the most important aspect. In such games the only events with in-game value are those resulting from code. Such games are very common and include everything from hack-and-slash MUDs to various tactical simulations. So your first decision needs to be just what type of system you are aiming for. This page will try to give some ideas for how to organize the “coded” part of your system, however big that may be. Overall system infrastructure¶ We strongly recommend that you code your rule system as stand-alone as possible. That is, don’t spread your skill check code, race bonus calculation, die modifiers or what have you all over your game. Put everything you would need to look up in a rule book into a module in mygame/world. Hide away as much as you can. Think of it as a black box (or maybe the code representation of an all-knowing game master). The rest of your game will ask this black box questions and get answers back. Exactly how it arrives at those results should not need to be known outside the box. Doing it this way makes it easier to change and update things in one place later. Store only the minimum stuff you need with each game object. That is, if your Characters need values for Health, a list of skills etc, store those things on the Character - don’t store how to roll or change them. Next is to determine just how you want to store things on your Objects and Characters. You can choose to either store things as individual Attributes, like character.db.STR=34and character.db.Hunting_skill=20. But you could also use some custom storage method, like a dictionary character.db.skills = {"Hunting":34, "Fishing":20, ...}. Finally you could even go with a custom django model. Which is the better depends on your game and the complexity of your system. Make a clear API into your rules. That is, make methods/functions that you feed with, say, your Character and which skill you want to check. That is, you want something similar to this: from world import rules result = rules.roll_skill(character, "hunting") result = rules.roll_challenge(character1, character2, "swords") You might need to make these functions more or less complex depending on your game. For example the properties of the room might matter to the outcome of a roll (if the room is dark, burning etc). Establishing just what you need to send into your game mechanic module is a great way to also get a feel for what you need to add to your engine. Coded systems¶ Inspired by tabletop role playing games, most game systems mimic some sort of die mechanic. To this end Evennia offers a full dice roller in its contrib folder. For custom implementations, Python offers many ways to randomize a result using its in-built random module. No matter how it’s implemented, we will in this text refer to the action of determining an outcome as a “roll”. In a freeform system, the result of the roll is just compared with values and people (or the game master) just agree on what it means. In a coded system the result now needs to be processed somehow. There are many things that may happen as a result of rule enforcement: - Health may be added or deducted. This can effect the character in various ways. - Experience may need to be added, and if a level-based system is used, the player might need to be informed they have increased a level. - Room-wide effects need to be reported to the room, possibly affecting everyone in the room. There are also a slew of other things that fall under “Coded systems”, including things like weather, NPC artificial intelligence and game economy. Basically everything about the world that a Game master would control in a tabletop role playing game can be mimicked to some level by coded systems. Example of Rule module¶ Here is a simple example of a rule module. This is what we assume about our simple example game: - Characters have only four numerical values: - Their level, which starts at 1. - A skill combat, which determines how good they are at hitting things. Starts between 5 and 10. - Their Strength, STR, which determine how much damage they do. Starts between 1 and 10. - Their Health points, HP, which starts at 100. - When a Character reaches HP = 0, they are presumed “defeated”. Their HP is reset and they get a failure message (as a stand-in for death code). - Abilities are stored as simple Attributes on the Character. - “Rolls” are done by rolling a 100-sided die. If the result is below the combatvalue, it’s a success and damage is rolled. Damage is rolled as a six-sided die + the value of STR(for this example we ignore weapons and assume STRis all that matters). - Every successful attackroll gives 1-3 experience points ( XP). Every time the number of XPreaches (level + 1) ** 2, the Character levels up. When leveling up, the Character’s combatvalue goes up by 2 points and STRby one (this is a stand-in for a real progression system). Character¶ The Character typeclass is simple. It goes in mygame/typeclasses/characters.py. There is already an empty Character class there that Evennia will look to and use. from random import randint from evennia import DefaultCharacter class Character(DefaultCharacter): """ Custom rule-restricted character. We randomize the initial skill and ability values bettween 1-10. """ def at_object_creation(self): "Called only when first created" self.db.level = 1 self.db.HP = 100 self.db.XP = 0 self.db.STR = randint(1, 10) self.db.combat = randint(5, 10) @reload the server to load up the new code. Doing examine self will however not show the new Attributes on yourself. This is because the at_object_creation hook is only called on new Characters. Your Character was already created and will thus not have them. To force a reload, use the following command: @typeclass/force/reset self The examine self command will now show the new Attributes. Rule module¶ This is a module mygame/world/rules.py. from random import randint def roll_hit(): "Roll 1d100" return randint(1, 100) def roll_dmg(): "Roll 1d6" return randint(1, 6) def check_defeat(character): "Checks if a character is 'defeated'." if character.db.HP <= 0: character.msg("You fall down, defeated!") character.db.HP = 100 # reset def add_XP(character, amount): "Add XP to character, tracking level increases." character.db.XP += amount if character.db.XP >= (character.db.level + 1) ** 2: character.db.level += 1 character.db.STR += 1 character.db.combat += 2 character.msg("You are now level %i!" % character.db.level) def skill_combat(*args): """ This determines outcome of combat. The one who rolls under their combat skill AND higher than their opponent's roll hits. """ char1, char2 = args roll1, roll2 = roll_hit(), roll_hit() failtext = "You are hit by %s for %i damage!" wintext = "You hit %s for %i damage!" xp_gain = randint(1, 3) if char1.db.combat >= roll1 > roll2: # char 1 hits dmg = roll_dmg() + char1.db.STR char1.msg(wintext % (char2, dmg)) add_XP(char1, xp_gain) char2.msg(failtext % (char1, dmg)) char2.db.HP -= dmg check_defeat(char2) elif char2.db.combat >= roll2 > roll1: # char 2 hits dmg = roll_dmg() + char2.db.STR char1.msg(failtext % (char2, dmg)) char1.db.HP -= dmg check_defeat(char1) char2.msg(wintext % (char1, dmg)) add_XP(char2, xp_gain) else: # a draw drawtext = "Neither of you can find an opening." char1.msg(drawtext) char2.msg(drawtext) SKILLS = {"combat": skill_combat} def roll_challenge(character1, character2, skillname): """ Determine the outcome of a skill challenge between two characters based on the skillname given. """ if skillname in SKILLS: SKILLS[skillname](character1,-character2) else: raise RunTimeError("Skillname %s not found." % skillname) These few functions implement the entirety of our simple rule system. We have a function to check the “defeat” condition and reset the HP back to 100 again. We define a generic “skill” function. Multiple skills could all be added with the same signature; our SKILLS dictionary makes it easy to look up the skills regardless of what their actual functions are called. Finally, the access function roll_challenge just picks the skill and gets the result. In this example, the skill function actually does a lot - it not only rolls results, it also informs everyone of their results via character.msg() calls. Here is an example of usage in a game command: from evennia import Command from world import rules class CmdAttack(Command): """ attack an opponent Usage: attack <target> This will attack a target in the same room, dealing damage with your bare hands. """ def func(self): "Implementing combat" caller = self.caller if not self.args: caller.msg("You need to pick a target to attack.") return target = caller.search(self.args) if target: rules.roll_challenge(caller, target, "combat") Note how simple the command becomes and how generic you can make it. It becomes simple to offer any number of Combat commands by just extending this functionality - you can easily roll challenges and pick different skills to check. And if you ever decided to, say, change how to determine hit chance, you don’t have to change every command, but need only change the single roll_hit function inside your rules module.
http://evennia.readthedocs.io/en/latest/Implementing-a-game-rule-system.html
2018-03-17T06:06:25
CC-MAIN-2018-13
1521257644701.7
[]
evennia.readthedocs.io
Azure Germany security and identity services Key Vault For details on the Azure Key Vault service and how to use it, see the Key Vault global documentation. Key Vault is generally available in Azure Germany. As in global Azure, there is no extension, so Key Vault is available through PowerShell and CLI only. Azure Active Directory Azure Active Directory offers identity and access capabilities for information systems running in Microsoft Azure. By using directory services, security groups, and group policy, you can help control the access and security policies of the machines that use Azure Active Directory. You can use accounts and security groups, along with role-based access control (RBAC), to help manage access to the information systems. Azure Active Directory is generally available in Azure Germany. Variations - Azure Active Directory in Azure Germany is completely separated from Azure Active Directory in global Azure. - Customers cannot use a Microsoft account to sign in to Azure Germany. - The login suffix for Azure Germany is onmicrosoft.de (not onmicrosoft.com like in global Azure). - Customers need a separate subscription to work in Azure Germany. - Customers in Azure Germany cannot access resources that require a subscription or identity in global Azure. - Customers in global Azure cannot access resources that require a subscription or identity in Azure Germany. - Additional domains can be added/verified only in one of the cloud environments. Note Assigning rights to users from other tenants with both tenants inside Azure Germany is not yet available. Next steps For supplemental information and updates, subscribe to the Azure Germany blog.
https://docs.microsoft.com/en-us/azure/germany/germany-services-securityandidentity
2018-03-17T06:41:18
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
Async Process¶ This is considered an advanced topic. Synchronous versus Asynchronous¶ Most program code operates synchronously. This means that each statement in your code gets processed and finishes before the next can begin. This makes for easy-to-understand code. It is also a requirement in many cases - a subsequent piece of code often depend on something calculated or defined in a previous statement. Consider this piece of code: print "before call ..." long_running_function() print "after call ..." When run, this will print "before call ...", after which the long_running_function gets to work for however long time. Only once that is done, the system prints "after call ...". Easy and logical to follow. Most of Evennia work in this way and often it’s important that commands get executed in the same strict order they were coded. Evennia, via Twisted, is a single-process multi-user server. In simple terms this means that it swiftly switches between dealing with player input so quickly that each player feels like they do things at the same time. This is a clever illusion however: If one user, say, runs a command containing that long_running_function, all other players are effectively forced to wait until it finishes. Now, it should be said that on a modern computer system this is rarely an issue. Very few commands run so long that other users notice it. And as mentioned, most of the time you want to enforce all commands to occur in strict sequence. When delays do become noticeable and you don’t care in which order the command actually completes, you can run it asynchronously. This makes use of the run_async() function in src/utils/utils.py: run_async(function, *args, **kwargs) Where function will be called asynchronously with *args and **kwargs. Example: from evevnnia import utils print "before call ..." utils.run_async(long_running_function) print "after call ..." Now, when running this you will find that the program will not wait around for long_running_function to finish. In fact you will see "before call ..." and "after call ..." printed out right away. The long-running function will run in the background and you (and other users) can go on as normal. Customizing asynchronous operation¶ A complication with using asynchronous calls is what to do with the result from that call. What if long_running_function returns a value that you need? It makes no real sense to put any lines of code after the call to try to deal with the result from long_running_function above - as we saw the "after call ..." got printed long before long_running_function was finished, making that line quite pointless for processing any data from the function. Instead one has to use callbacks. utils.run_async takes reserved kwargs that won’t be passed into the long-running function: at_return(r)(the callback) is called when the asynchronous function ( long_running_functionabove) finishes successfully. The argument rwill then be the return value of that function (or None). def at_return(r): print r at_return_kwargs- an optional dictionary that will be fed as keyword arguments to the at_returncallback. at_err(e)(the errback) is called if the asynchronous function fails and raises an exception. This exception is passed to the errback wrapped in a Failure object e. If you do not supply an errback of your own, Evennia will automatically add one that silently writes errors to the evennia log. An example of an errback is found below: def at_err(e): print "There was an error:", str(e) at_err_kwargs- an optional dictionary that will be fed as keyword arguments to the at_errerrback. An example of making an asynchronous call from inside a Command definition: from ev import utils from game.gamesrc.commands.basecommand import Command class CmdAsync(Command): key = "asynccommand" def func(self): def long_running_function(): #[... lots of time-consuming code return final_value def at_return(r): self.caller.msg("The final value is %s" % r) def at_err(e): self.caller.msg("There was an error: %s" % e) # do the async call, setting all callbacks utils.run_async(long_running_function, at_return, at_err) That’s it - from here on we can forget about long_running_function and go on with what else need to be done. Whenever it finishes, the at_return function will be called and the final value will pop up for us to see. If not we will see an error message. Process Pool¶ Note: The Process pool is currently not available nor supported, so the following section should be ignored. An old and incompatible version of the procpool can be found in the evennia/procpool repository if you are interested. The ProcPool is an Evennia subsystem that launches a pool of processes based on the ampoule package (included with Evennia). When active, run_async will use this pool to offload its commands. ProcPool is deactivated by default, it can be turned on with settings.PROCPOOL_ENABLED. It should be noted that the default SQLite3 database is not suitable for for multiprocess operation. So if you use ``ProcPool`` you should consider switching to another database such as MySQL or PostgreSQL. The Process Pool makes several additional options available to run_async. The following keyword arguments make sense when ProcPool is active: use_thread- this force-reverts back to thread operation (as above). It effectively deactivates all additional features ProcPooloffers. proc_timeout- this enforces a timeout for the running process in seconds; after this time the process will be killed. at_return, at_err- these work the same as above. In addition to feeding a single callable to run_async, the first argument may also be a source string. This is a piece of python source code that will be executed in a subprocess via ProcPool. Any extra keyword arguments to run_async that are not one of the reserved ones will be used to specify what will be available in the execution environment. There is one special variable used in the remote execution: _return. This is a function, and all data fed to _return will be returned from the execution environment and appear as input to your at_return callback (if it is defined). You can call _return multiple times in your code - the return value will then be a list. Example: from src.utils.utils import run_async source = """ from time import sleep sleep(5) # sleep five secs val = testvar + 5 _return(val) _return(val + 5) """ # we assume myobj is a character retrieved earlier # these callbacks will just print results/errors def callback(ret): myobj.msg(ret) def errback(err): myobj.msg(err) testvar = 3 # run async run_async(source, at_return=callback, at_err=errback, testvar=testvar) # this will return '[8, 13]' You can also test the async mechanism from in-game using the @py command: @py from src.utils.utils import run_async;run_async("_return(1+2)",at_return=self.msg) Note: The code execution runs without any security checks, so it should not be available to unprivileged users. Try contrib.evlang.evlang.limited_exec for running a more restricted version of Python for untrusted users. This will use run_async under the hood. delay¶ The delay function is a much simpler sibling to run_async. It is in fact just a way to delay the execution of a command until a future time. This is equivalent to something like time.sleep() except delay is asynchronous while sleep would lock the entire server for the duration of the sleep. def callback(obj): obj.msg("Returning!") delay(10, caller, callback=callback) This will delay the execution of the callback for 10 seconds. This function is explored much more in the Command Duration Tutorial. Assorted notes¶ Note that the run_async will try to launch a separate thread behind the scenes. Some databases, notably our default database SQLite3, does not allow concurrent read/writes. So if you do a lot of database access (like saving to an Attribute) in your function, your code might actually run slower using this functionality if you are not careful. Extensive real-world testing is your friend here. Overall, be careful with choosing when to use asynchronous calls. It is mainly useful for large administration operations that have no direct influence on the game world (imports and backup operations come to mind). Since there is no telling exactly when an asynchronous call actually ends, using them for in-game commands is to potentially invite confusion and inconsistencies (and very hard-to-reproduce bugs). The very first synchronous example above is not really correct in the case of Twisted, which is inherently an asynchronous server. Notably you might find that you will not see the first before call ... text being printed out right away. Instead all texts could end up being delayed until after the long-running process finishes. So all commands will retain their relative order as expected, but they may appear with delays or in groups. Further reading¶ Technically, run_async is just a very thin and simplified wrapper around a Twisted Deferred object; the wrapper sets up a separate thread and assigns a default errback also if none is supplied. If you know what you are doing there is nothing stopping you from bypassing the utility function, building a more sophisticated callback chain after your own liking.
http://evennia.readthedocs.io/en/latest/Async-Process.html
2018-03-17T06:02:17
CC-MAIN-2018-13
1521257644701.7
[]
evennia.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Returns the evaluation results for the specified AWS Config rule. The results indicate which AWS resources were evaluated by the rule, when each resource was last evaluated, and whether each resource complies with the rule. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to GetComplianceDetailsByConfigRuleAsync. Namespace: Amazon.ConfigService Assembly: AWSSDK.ConfigService.dll Version: 3.x.y.z Container for the necessary parameters to execute the GetComplianceDetails
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ConfigService/MIConfigServiceGetComplianceDetailsByConfigRuleGetComplianceDetailsByConfigRuleRequest.html
2018-03-17T06:45:46
CC-MAIN-2018-13
1521257644701.7
[]
docs.aws.amazon.com
Push API \ Publish Tweet On Twitter PHP SDK Push API Resources Workflow Request: the code to send to the API Send a POST request with the data below to the endpoint /push/identities/<identity_token>/twitter/post.json to publish a new Tweet on behalf of a Twitter user. The <identity_token> is obtained whenever one of your users connects using a social network account. To be able to use this endpoint Twitter must be fully configured for your OneAll Site and the setting Permissions \ Access of your Twitter app must be set to Read and Write. POST data to include in the request { "request":{ "push":{ "post":{ "message": "#message#", /twitter/post.json", "status":{ "flag": "success", "code": 200, "info": "Your request has been processed successfully" } }, "result":{ "data":{ "provider": "twitter", "object": "post", "post_id": "910891858989060096", "post_location": "" } } } }
http://docs.oneall.com/api/resources/push/twitter/post/
2018-03-17T06:25:49
CC-MAIN-2018-13
1521257644701.7
[]
docs.oneall.com
GUI Automation Commands and Functions Provided here are brief descriptions and links to more information about SenseTalk commands and functions you need to get started with GUI automation. For more information about SenseTalk, see the SenseTalk Reference. SenseTalk GUI Automation Commands and Functions User Interaction: Use these commands and functions to prompt for user input during runtime. Mouse Events and Control: These commands and functions control the SUT's mouse in the eggPlant Functional Viewer window. Keyboard and Clipboard Events: These commands and functions interact with the SUT's keyboard and clipboard. Text Encryption: These commands and functions encrypt sensitive text, like passwords, for use in scripts. OCR Text-Reading Functions: This page describes the SenseTalk commands and functions that work with text recognition through OCR. Image and OCR Searches: These commands and functions search for images or text (OCR) in the eggPlant Functional Viewer window, without performing any actions on the SUT. Image File and Suite Information: These functions return various properties associated with images and the eggPlant Functional suite. Found-Image and Found-OCR Information Functions: These functions return additional information about the last image or text (OCR) search that was found in the Viewer window. eggPlant Functional Interface and Run Options Control: These commands and functions let you modify script run options and control the eggPlant Functional GUI. Script Calling: Use these commands to call other scripts from within scripts. SUT Information and Control: These commands and functions let you use SenseTalk scripts to connect to your SUTs from eggPlant Functional as well as gather information about connections and other SUT information. Results and Reporting: These commands and functions customize the results created by scripts. Mobile SUT Information: Use these commands and functions to gather information about your eggPlant Functional mobile SUTs. Mobile Control and Touch Events: These commands and functions let you perform actions against your mobile device SUTs. This topic was last updated on July 31, 2017, at 09:54:37 AM.
http://docs.testplant.com/ePF/SenseTalk/stk-gui-automation-commands-functions.htm
2018-03-17T05:57:17
CC-MAIN-2018-13
1521257644701.7
[]
docs.testplant.com
Test: Visual Studio Test Agent Deployment This task is deprecated in VSTS and TFS 2018 and later. Use version 2.x or higher of the Visual Studio Test task together with phases to run unit and functional tests on the universal agent. For more details, see Testing with unified agents and phases. TFS 2017 and earlier Deploy and configure the test agent to run tests on a set of machines. The test agent deployed by this task can collect data or run distributed tests using the Visual Studio Test task. Demands and prerequisites This task requires the target computer to have: - Windows 7 Service Pack 1 or Windows 2008 R2 Service Pack 2 or higher - .NET 4.5 or higher - PSRemoting enabled by running the Enable-PSRemoting PowerShell script Windows Remote Management (WinRM) The task supports a maximum of 32 machines/agents. Supported scenarios Use this task for: - Running automated tests against on-premises standard environments - Running automated tests against existing Azure environments - Running automated tests against newly provisioned Azure environments The supported options for these scenarios are: - TFS - On-premises and VSTS - Build and release agents - Hosted and on-premises agents are supported. - The agent must be able to communicate with all test machines. If the test machines are on-premises behind a firewall, a VSTS hosted agent cannot be used because it will not be able to communicate with the test machines. - The agent must have Internet access to download test agents. If this is not the case, the test agent must be manually downloaded, uploaded to a network location accessible to the agent, and the Test Agent Location parameter used to specify the location. The user must manually check for new versions of the agent and update the test machines. - Continuous integration/continuous deployment workflows - Build/deploy/test tasks are supported in both Build and Release Management workflows. - Machine group configuration - Only Windows-based machines are supported inside a machine group for build/deploy/test tasks. Linux, macOS, or other platforms are not supported inside a machine group. - Installing any version of Visual Studio on any of the test machines is not supported. - Installing any older version of the test agent on any of the test machines is not supported. - Test machine topologies - Azure-based test machines are fully supported, both existing test machines and newly provisioned test machines. - Machines with the test agent installed must have network access to the TFS instance in use. Network-isolated test machines are not supported. - Domain-joined test machines are supported. - Workgroup-joined test machines must use HTTPS authentication configured during machine group creation. - Usage Error Conditions - Using the same test machines across different machine groups, and running builds (with any build/deploy/test tasks) in parallel against those machine groups is not supported. - Cancelling an in-progress build or release that contains any build/deploy/test tasks is not supported. If you do cancel, behavior of subsequent builds may be unpredictable. - Cancelling an ongoing test run queued through build/deploy/test tasks is not supported. - Configuring the test agent and running tests as a non-administrator, or by using a service account, is not supported. - Running tests for Universal Windows Platform apps is not supported. Use the Visual Studio Test task to run these tests. Example More information - Using the Visual Studio Agent Deployment task on machines not connected to the internet - Set up automated testing for your builds - Source code for this task Related tasks Q&A When would I use the Enable Data Collection Only option? An example would be in a client-server application model, where you deploy the test agent on the servers and use another task to deploy the test agent to test machines. This enables you to collect data from both server and client machines without triggering the execution of tests on the server machines. How do I create an Azure Resource Group for testing? See Using the Azure Portal to manage your Azure resources and Azure Resource Manager - Creating a Resource Group and a VNET. Report problems through the Developer Community Send suggestions on UserVoice
https://docs.microsoft.com/en-us/vsts/build-release/tasks/test/visual-studio-test-agent-deployment
2018-03-17T06:40:40
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
How it works What you need - A Brandfolder account with admin privileges. - A Mobile Locker administrator account. - The Mobile Locker Enterprise plan. Connect Brandfolder to Mobile Locker Go to your profile page in Brandfolder here, then click Integrations. Find your Brandfolder under API Keys and click the copy button to copy the API key to your clipboard. Open Mobile Locker in a new tab and go to the Brandfolder section of the Edit Team page. Paste the API key into the Brandfolder API Key field, then press ENTER. If your key is valid, several additional fields will appear. In the first dropdown, select the Brandfolder you want to connect with Mobile Locker. You may only have one Brandfolder. Select the Brandfolder Section that will synchronize to Mobile Locker. In this example, we created a section in Brandfolder named "Mobile Locker". Any files that are dragged into that section will be pushed to Mobile Locker. Select the Mobile Locker group that will be given access to the files that come from Brandfolder. Select the Share Theme that will be associated with these files. Scroll to the bottom and press Save. Updated 10 months ago
https://docs.mobilelocker.com/docs/brandfolder
2021-04-10T13:54:28
CC-MAIN-2021-17
1618038057142.4
[array(['https://files.readme.io/fbd028e-2020-06-03_13-36-05.png', '2020-06-03_13-36-05.png'], dtype=object) array(['https://files.readme.io/fbd028e-2020-06-03_13-36-05.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d2cf2c8-2020-06-03_13-50-58.png', '2020-06-03_13-50-58.png'], dtype=object) array(['https://files.readme.io/d2cf2c8-2020-06-03_13-50-58.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3da2987-2020-06-03_13-50-08.png', '2020-06-03_13-50-08.png'], dtype=object) array(['https://files.readme.io/3da2987-2020-06-03_13-50-08.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/c01a649-2020-06-03_13-56-35.png', '2020-06-03_13-56-35.png'], dtype=object) array(['https://files.readme.io/c01a649-2020-06-03_13-56-35.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/a93d3cb-2020-06-03_13-57-44.png', '2020-06-03_13-57-44.png'], dtype=object) array(['https://files.readme.io/a93d3cb-2020-06-03_13-57-44.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/4fd1698-2020-06-03_13-59-02.png', '2020-06-03_13-59-02.png'], dtype=object) array(['https://files.readme.io/4fd1698-2020-06-03_13-59-02.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/c84291e-2020-06-03_14-00-28.png', '2020-06-03_14-00-28.png'], dtype=object) array(['https://files.readme.io/c84291e-2020-06-03_14-00-28.png', 'Click to close...'], dtype=object) ]
docs.mobilelocker.com
Adds support for setting notification period for all types. Python 2.7 support has been dropped. Last release of python-monascaclient to support python 2.7 is OpenStack Train. The minimum version of Python now supported is Python 3.6. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/releasenotes/python-monascaclient/ussuri.html
2021-04-10T14:15:55
CC-MAIN-2021-17
1618038057142.4
[]
docs.openstack.org
VMware vRealize® Operations Cloud™ gives IT and cloud ops teams self-driving operations powered by artificial intelligence (AI) to optimize, plan and scale their private, hybrid and public clouds for consistent operations to drive agility and innovation. For more information, see product documentation and visit. Contents - What's New February 12, 2021 - What's New February 04, 2021 - What's New November 17, 2020 - What's New October 6, 2020 - What's New August 11, 2020 - What's New July 09, 2020 - What's New June 02, 2020 - What's New April 14, 2020 What's New February 12, 2021 vRealize Operations Cloud is now available in: - Canada In addition to the following regions: - Germany (Frankfurt) -. What's New February 04, 2021 For the complete list of new features and known issues in the February 04, 2021 update, see the vRealize Operations Manager release notes. Here are new features in the cloud offering: In-product Feedback: - Provide feedback using forms available within the vRealize Operations Cloud service portal. - Feedback forms are completely optional and you can unsubscribe at any time. Once unsubscribed, you won’t see those forms again, and you won’t be able to subscribe again. vRealize Operations Management Pack for Horizon: - Manage and monitor the health of your Horizon environment using vRealize Operations Management Pack for Horizon, which is now available in the in-product Marketplace. - vRealize Operation Cloud billing for Horizon Management Pack - 1 OSI = 4 VDI VMs - 1 OSI = 4 RDS Hosts - 1 OSI = 1 Connection sever vRealize Operations Billing Usage Dashboard: - The vRealize Operations Billing Usage dashboard is now available. It displays a list of objects that are being metered towards the vRealize Operations Cloud license. If you want to monitor new object types (from installing management packs), you can create a new dashboard by doing the following: - Clone the dashboard - Edit the Object List widget - Add object types by choosing Output Filter > Advanced > Object Types - Note that nested ESXi Hosts are excluded from billing. For example, vSAN Witness Hosts Known Limitation - Upgrade check for Cloud Proxy is disabled. For workaround, see KB 82471. - vRealize Operations Cloud supports vRealize Orchestrator Management Pack where the vCenter and vRealize Orchestrator are still on-prem, but the workflows have certain limitations. vRealize Orchestrator Management Pack Out-Of-the-Box workflow Apply Host Security Configuration Rules and the dependent workflows such as Configure Host Security Configuration Data are not supported. For workaround, see KB 82804. Known Issue - You cannot raise a support request from the vRealize Operations Cloud support panel, because the link does not open. Workaround: Open a support request from the VMware Cloud services support panel, or contact GSS via the chat feature in the vRealize Operations Cloud support panel. For more information, see the How Do I Get Support topic. What's New November 17, 2020 vRealize Operations Cloud is now available in: - Germany (Frankfurt) In addition to the following regions: -. Note: vRealize AI Cloud is not yet available in the new regions. What's New October 6, 2020 For the complete list of new features and known issues in the October 6, 2020 update, see the vRealize Operations Manager release notes. Here are new features in the cloud offering: - vRealize Operations Cloud Specific: - Enhanced service availability providing 99.9% SLA, maintenance period for upgrades reduced to near-zero downtime. - Simplified migration of custom content from vRealize Operations on-prem to vRealize Operations Cloud. Link to documentation: Managing Content. - Improved user synchronization with VMware Cloud Services Platform, allowing users to import enterprise and customer user groups into vRealize Operations Cloud. - VMware hosted email support for alert notification and reports. Link to documentation: VMware Hosted Email plug-in for Outbound Settings. - Support for sending SNMP Trap notifications. Link to documentation: Add an SNMP Trap Plug-In for vRealize Operations Cloud Outbound Alerts. - In-product marketplace for vRealize Operations Cloud extensibility using Management Pack for easy access and installation. Link to documentation: Marketplace for Solutions. - vRealize Operations Cloud Proxy offers CLI commands and can now be monitored as an object from vRealize Operations Cloud. Link to documentation: Using the Cloud Proxy Command-Line Interface on Cloud. - Localization of in-product support panel. View help text and documentation link in your own preferred language. - Exclusive capabilities in vRealize Operations Cloud: - Near real-time monitoring - 20 seconds - for vSphere Clouds for better observability. Link to documentation: Configure a vCenter Server Cloud Account in vRealize Operations Cloud. - Application Performance Monitoring tool integration providing support for AppDynamics, Datadog, Dynatrace, and New Relic. Link to documentation: Application Integration. - Enhanced vRealize Log Insight Cloud Integration including log-metric correlation and enhanced dashboards. - Enhanced Skyline Integration expanding proactive issue avoidance, troubleshooting and integrated workflows with capabilities such as launching Troubleshooting Workbench from Skyline Findings and in-context launch of Objects Details from Objects in Skyline. Link to documentation: VMware vRealize Operations Management Pack for VMware Skyline. - Exclusive with vRealize Cloud Universal: - Choice of Advanced or Enterprise edition of vRealize Operations Cloud. Details here: vRealize Cloud Universal. - vRealize Cloud Federated Analytics for managing multiple vRealize Operations Cloud and vRealize Operations OnPrem instances. Link to documentation: vRealize Cloud Federation Adapter. - vRealize AI Cloud for self-tuning and continuous optimization of vSAN clusters. Link to documentation: Configuring and Using vRealize AI Cloud. Known Limitations - Cloud Proxy - With the latest update of vRealize Operations Cloud, the Cloud Proxy needs to be updated to the latest as well. If there is an issue with the upgrade that results in the Cloud Proxy not being updated automatically, contact VMware Support. - You can check the version of the Cloud Proxy by navigating to the Administration -> Management -> Cloud Proxies page. You should have version 8.2.0.16992050 or higher. - Cloud Federation Adapter - Manually stop and start the adapter instances to get the latest changes and start the collection of the newly added metrics. - Due to the changes in the Metrics Configuration file, after upgrading the Cloud Federation Adapter, manually stop and start the Cloud Federation Adapter instances to get the latest changes and start the collection of the newly added metrics. - Integration of vRealize Operations Cloud and vRealize Log Insight Cloud currently not supported for VMware Cloud on AWS (VMC). Known Issues - Attempting to activate or deactivate an Application Monitoring agent in a Windows endpoint fails after a content upgrade, if the Windows endpoint Application Monitoring agent has already been bootstrapped in an earlier version. In the Manage Agents tab, plugins fail to activate/deactivate in vRealize Operations Cloud Workaround: Contact VMware Technical Support What's New August 11, 2020 vRealize Operations Cloud is available in the following two AWS regions: - US West (Oregon) - Asia Pacific (Sydney) You have the option to request for the vRealize Operations Cloud service in one of the two supported regions, to comply with your organization's data sovereignty and data separation requirements. This is a one-time choice that you can make before starting to use vRealize Operations Cloud. What's New July 09, 2020 - A chat feature is available in the support panel. You can contact VMware support engineers and customer support representatives using the chat feature. Resolved Issues - vRealize Automation Cloud Integration is now available. For more information, see vRealize Automation 8.X in the VMware vRealize Operations Cloud Configuration Guide. - For an additional list of resolved issues, see the vRealize Operations Manager release notes. Known Issues - After you uninstall a telegraf agent, the Operation Status displays, "Uninstall in Progress" instead of "Uninstall Success" in the Administration > Manage Agents screen. As a workaround, uninstall the agent once again. For more information, see Uninstall an Agent in the VMware vRealize Operations Cloud Configuration Guide. What's New June 02, 2020 Resolved Issues - Cloud Proxy is affected by Authentication Bypass and Directory Traversal vulnerabilities. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifiers CVE-2020-11651 (Authentication Bypass) and CVE-2020-11652 (Directory Traversal) to these issues. This is now addressed. - An issue with Service Discovery, which was handling data collection status for deleted and powered-off VMs incorrectly, is resolved. Learn about Service Discovery by clicking this link. What's New April 14, 2020 For the complete list of new features and known issues in the April 14 2020 release, see the vRealize Operations Manager release notes. Here are new features in the cloud offering: vRealize Operations Cloud Infrastructure - Connect and manage on-premise end point such as vCenter through locally installed Cloud Proxies. - Install Cloud Proxy into VMware Cloud if vCenter does not have a public IP address. - Cloud Proxy supports Network proxy (HTTP/HTTPS proxy) to connect to vRealize Operations Cloud. - Cloud Proxy instances can be combined into Collector Groups for high availability. - Automatic upgrade of Cloud Proxy instances when vRealize Operations Cloud is updated. - Cloud Proxy to manage the application monitoring agents: - Manage the life cycle of application monitoring telegraf agents directly from Administration → Inventory → Manage Agents. - Start managing applications and configuring agents via Home → Manage Applications → Monitor Applications. - Ability to connect to on-premise servers, through Cloud Proxy, for the following outbound plugins: - Standard Email plugin - Rest notification plugin Cloud Management Platform - Integration of vRealize Operations Cloud with vRealize Log Insight Cloud - Integration is automatically configured when both these services are running in a single Cloud Services organization. - View logs in context of the selected object in vRealize Operations Cloud. - Launch into the full capabilities of vRealize Log Insight Cloud at the click of a button. Known Limitations - You cannot install any non-native Management Pack. Contact VMware to request installation of non-native management packs. - Dashboard sharing using URL, Email and Embed options is not available. - Custom compliance benchmark is not supported. - OPSCLI is not supported. Use REST APIs instead. - Following outbound plugins are not supported: - SNMP Trap Plugin - Log File Plugin - Smarts SAM Notification Plugin - Network Share Plugin - You cannot change the collector for a vCenter adapter. As a workaround, delete the vCenter adapter using suite-api. - You must specify a working NTP server when you deploy the Cloud Proxy OVF. Alternatively, ensure that the ESXi on which the Cloud Proxy runs has an NTP server specified. - Day2 Workload Optimization for vRealize Automation Cloud managed VMs is not fully functional in vRealize Operations Cloud. The VMs managed by vRealize Automation Cloud will not be relocated when executing the Optimize Now action from the Workload Optimization plan generated by vRealize Operations Cloud. The status in the UI will show failure, but this does not have any negative impact. Workload Optimization for other VMs that are not managed by vRealize Automation Cloud is not impacted. Known Issues - vRealize Automation Cloud Integration is unavailable. - Organization member role doesn't work with VMC cloud account configuration. The workaround is to use the organization owner role to generate the API token for VMC cloud account credentials. - If you have more than one VMware Cloud services org, and you receive a resource alert notification, clicking the notification takes you the default org and not the org from which the notification originated. After you click the resource alert notification, you see a blank page because the notification takes you to the default org. - The API endpoint responds with HTTP 400 if you add a trailing slash to the URL (). Do not use a trailing slash when you test the connection. - The links displayed in the widgets of some AWS dashboards fail, because of incorrect URL mapping. As a workaround, add the correct prefix to the hard coded redirect path ui/index.action#/dashboards/dashboard/1803b3c9-8556-4469-b099-6a31eb789a3e. The prefix depends on the organization which hosts your instance of vRealize Operations Cloud. The following widgets are affected: - AWS-Inventory Dashboard - Compute widget with hyperlink for EC2 resource - Storage Dashboard - EBS Volumes widget with hyper link on region(s) - Compute:EC2 Dashboard - EC2 widget where hyperlink is on regions
https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/rn/vRealize-Operations-Manager-Cloud.html
2021-04-10T14:49:21
CC-MAIN-2021-17
1618038057142.4
[]
docs.vmware.com
Server logs can provide useful insights when troubleshooting a vSphere Bitfusion server. To investigate any possible issues with vSphere Bitfusion, you can view the activity log of a specific vSphere Bitfusion server. For example, you can check the logs for thumbprint problems or vCenter Server GUID problems, that have occurred during the vSphere Bitfusion Plug-in registration process. Procedure - In the vSphere Client, select . - On the Servers tab, select a server from the list. - From the Actions drop-down menu, select Logs.
https://docs.vmware.com/en/VMware-vSphere-Bitfusion/2.5/User-Guide/GUID-446089A8-D223-4BA5-878E-C77CB715DE67.html
2021-04-10T13:58:40
CC-MAIN-2021-17
1618038057142.4
[]
docs.vmware.com
Tasks help you document and track the progress of assignments and projects in Address Manager On the Tasks page, you can add, edit, and delete tasks. To view your tasks and get to the Tasks page, you need to have the Tasks widget open on your My IPAM page. For information about working with My IPAM tabs and widgets, refer to My IPAM overview. To add a task: - Select the My IPAM tab. In the Tasks widget, click More... - Under Tasks, click New. - Define or edit the task in these fields: - Description—type a name for the task. The description appears in the Tasks section of the My IPAM page. This field is required. - Priority—select a priority level for the task. The priority level indicates the importance of the task. - State—select a state for the task. The state describes the progress and status of the tasks. - Percent Completed—type a number to indicate how much of the task is complete. - Start Date and Due Date—type a date in the format DD MMM YYYY or click the calendar button to select a date. - Click Add to add the task and return to the Tasks page, or click Add Next to add another task.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Adding-tasks/8.3.1
2021-04-10T14:58:39
CC-MAIN-2021-17
1618038057142.4
[]
docs.bluecatnetworks.com
Deployment roles can be set at IP blocks and networks, DNS views and zones, DHCP match classes, MAC pools, and TFTP groups. You can view all roles assigned to a server, and the objects to which the roles are assigned from the Associated Roles tab. The Associated Roles tabs of each server’s information page displays the DNS, DHCP, and TFTP deployment roles associated with the server. From here you can view the deployment role details and navigate to the object to which the view is assigned. To view the deployment roles assigned to Associated Roles tab. - To view deployment role details, click the name of a deployment role in the Service column. - To navigate to the object with the assigned role, click the name of an object in the Object column.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Viewing-Deployment-Roles/9.0.0
2021-04-10T14:18:26
CC-MAIN-2021-17
1618038057142.4
[]
docs.bluecatnetworks.com
Note: The name “SmartConnector” will be changing to “Integration App” to more clearly establish that our pre-built Integration Apps are built on our flagship Integration Platform as a Service, integrator.io. Find out more about integrator.io. As a sales representative or account manager, this feature provides you with an option to edit an existing active subscription and create a renew change order. A renew change order is created to renew a Subscription once the Subscription Term ends. This feature allows you to renew a subscription along with the ability to upsell or cross-sell by changing subscription plan, including more products in the renewal or even changing the pricing for each of the product included for renewal. A subscription can be renewed, by creating a renew change order, when it meets the following conditions: - Subscription, for which the renewal subscription needs to be created, is in the Active state. A renew change order can only be created for an Active Subscription. - Subscription has at least one Price Book Line that is of type Usage or Recurring. - An active change order exists in the subscription on the date when renew subscription is being created. - At least one subscription line must have the “Include in Renewal Subscription” field marked as “True”. Once all the aforementioned conditions are met, you can click the Renew button on a subscription to create a renew change order. Once a renew change order is created for an existing subscription, the SuiteBilling connector (via Salesforce Subscription Change Order to NetSuite Subscription Change Order flow) allows you to sync this renew change order into NetSuite. In NetSuite, a new Subscription and Sales Order record is created using this renew change order. The new subscription in NetSuite is then synced back to Salesforce as a renewal subscription. The sales order created in NetSuite for a customer is synced to Salesforce as a renewal opportunity using the NetSuite Sales Order to Salesforce Opportunity flow. The overall start and end dates of the sales order define the subscription term, and items on the sales order generate subscription line items on the renewal subscription. Renew Change Order Data flow Types of Renewal Subscription When you click the Renew button on an existing active subscription to create a renew change order, the Primary Information page is displayed, which can be used to create new or extend the existing subscriptions. You can use the Renewal Method drop-down list to create one of the following types of renewal subscription: Create New Subscription Use this Renewal Method to create: - A renewal Subscription with a new Price Book and Subscription Plan. - A renewal Sales Order in NetSuite. - A renewal Salesforce Opportunity corresponding to the renewal Sales Order in NetSuite. Select this option and define the values in the Renewal term, Renewal Plan, and Renewal Price Book field as per your requirement. Assign a date in the RIn the Renewal Transaction Type field, select the Sales Order option to create a sales order in NetSuite with a new subscription. Click Apply for each subscription line you want to include in the new renewal subscription. Click the value in the Price Plan field to access the Price Plan window for reviewing and editing the price plan associated with the subscription line and click Save. Note: Creating Estimate and Opportunity using the Renewal Transaction Type field is not supported in this release. The Subscription Change Order page is displayed. Click Sync to NetSuite to sync the renew change order to NetSuite. To open the synced renew change order in NetSuite, go to subscription record and click the Subscription Change Order subtab and click View. Once the renew change order is synced to NetSuite, this change order is used for automatically creating a new Subscription and Sales Order in NetSuite. A new renewal subscription is created with a suffix “Renew 1”. For example, if the subscription name you are renewing is ACME LTD then a renewed subscription will get created with a name, ACME LTD Renew 1. To find the associated subscription and sales order, from the Subscription Change Order- Renew page click the Renewals Details subtab. The new subscription is then synced back to Salesforce as a renewal subscription with the same name: The sales order from NetSuite is synced to Salesforce as Opportunity. This renewal opportunity will also display the associated subscription in the Subscriptions panel Note: For a renewal opportunity to be successfully created in Salesforce from the sales order in NetSuite, the subscription that is added to the sales order needs to be in the draft state. This means at least one subscription line in the subscription has to be in the Included state. The following tables display the status of Subscription line in existing Subscription and its corresponding status in the renew change order: Subscription Line Type: Usage and Recurring Subscription Line Type: One-Time Extend Existing Subscription As the aim is to extend the subscription along with all its associated components, you must only select the Renewal Method as Extend Existing Subscription. Define the values in the Renewal term, R Once you click Done, the Subscription Change Order page is displayed. Click Sync to NetSuite to sync the renew change order to NetSuite. The following tables display the status of Subscription line in existing Subscription and its corresponding status in the renew change order: Subscription Line Type: Usage and Recurring Subscription Line Type: One-Time Note: - Once created, a Renew Change Order cannot be deleted or edited. - The Renewal Transaction Type drop-down list in the Change order page only supports Sales Order. The remaining transaction types are not supported in this release. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360011966352-Renew-Change-Order
2021-04-10T15:08:41
CC-MAIN-2021-17
1618038057142.4
[array(['/hc/article_attachments/360010079491/12.jpg', '12.jpg'], dtype=object) array(['/hc/article_attachments/360008582692/1.jpg', '1.jpg'], dtype=object) array(['/hc/article_attachments/360010079971/14.jpg', '14.jpg'], dtype=object) array(['/hc/article_attachments/360010060592/16.jpg', '16.jpg'], dtype=object) array(['/hc/article_attachments/360010081551/15.jpg', '15.jpg'], dtype=object) array(['/hc/article_attachments/360008582792/3.jpg', '3.jpg'], dtype=object) array(['/hc/article_attachments/360008582832/4.jpg', '4.jpg'], dtype=object) array(['/hc/article_attachments/360008618371/5.jpg', '5.jpg'], dtype=object) array(['/hc/article_attachments/360008618391/6.jpg', '6.jpg'], dtype=object) array(['/hc/article_attachments/360010082831/18.jpg', '18.jpg'], dtype=object) ]
docs.celigo.com
ProductDyno WP Plugin If you have content on a WordPress site that you sell and want to deliver securely by ProductDyno then we have you covered! The ProductDyno WP Plugin is listed at WordPress.org If your content is already set up on a WordPress site, now you can secure and deliver your content in Wordpress! You do, however, need to set up your product as usual in ProductDyno. Please follow these steps: - 1 Install the ProductDyno WP Plugin: Please note: your content must not be already protected by another membership plugin - 2 Add your product to ProductDyno - using the wizard, choosing to host the content on your own site but NOT choosing EXTERNAL CONTENT. Add your product to a collection if you choose. DO NOT add content - you already have that on your WordPress site! - 3 Secure each content page on your WordPress site as shown for a product or a collection Product Collection - 4 - Customize the Welcome and Reset Password emails and login screen. Please see below for the links you should use to replace the standard "short code" links in the Welcome and Reset Password emails. - Inside the ProductDyno WP Plugin Control Panel Connected: if you haven't connected the plugin to your ProductDyno account, this icon will show an open padlock. If it's closed, you know you have successfully connected the plugin to your ProductDyno account. Help & Support: Links to the ProductDyno Knowledge base and support email address. Clear Cache Data: To increase the performance, we cache the data from ProductDyno. If you update something inside ProductDyno and don't see the change reflected inside the plugin…use this button to clear the cache and get new data from ProductDyno. It currently applies to the Product/Collection drop-down and Login page settings. Links On your pages and in your emails, etc you need to use the following links for public access, registration, login, etc. Notes: - Copy the URL of the FIRST page your members will see when accessing the product - use this URL to replace each occurance of <FIRSTPAGE> below - DO NOT use the ones shown in ProductDyno -> Product/Collection -> Domain & Access
https://docs.promotelabs.com/article/1152-productdyno-wp-plugin
2021-04-10T15:07:20
CC-MAIN-2021-17
1618038057142.4
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/5ce5313d0428634b85598e71/file-h9kya8JloH.png', None], dtype=object) ]
docs.promotelabs.com
Please login or sign up. You may also need to provide your support ID if you have not already done so. Microsoft Lync Client (formerly known as Microsoft Office Communicator) is a unified communications application that enables end users to communicate using a range of different communication options, including instant messaging (IM), voice, desktop sharing and video This page concerns the client only. The server component is modelled separately
https://docs.bmc.com/docs/display/Configipedia/Microsoft+Lync+Client
2021-04-10T15:23:11
CC-MAIN-2021-17
1618038057142.4
[]
docs.bmc.com
Crate tet_io Version 2.0.2 See all tet_io's items I/O host interface for tetcore runtime. Tetcore tetcore-tracing for more information. tracing tetcore-tracing Crossing is a helper wrapping any Encode-Decodeable type for transferring over the wasm barrier. Batch verification extension to register/retrieve from the externalities. Error verifying ECDSA signature Initialize tracing of tetcore_tracing not necessary – noop. To enable build without std and with the with-tracing-feature. with-tracing Type alias for Externalities implementation used in tests. The host functions Tetcore provides for the Wasm runtime environment.
https://docs.rs/tet-io/2.0.2/tet_io/
2021-04-10T14:49:02
CC-MAIN-2021-17
1618038057142.4
[]
docs.rs
Modeling Products In commercetools, a product is a sellable good or service. A product can be material or immaterial. Some examples of products include: - A pair of shoes - A mobile phone plan - An ebook - A license - A file On data level, Products are based on a Product Type selected when created. They can only define values for the Product Attributes a Product Type has defined. Product information is stored in two dimensions: Product Attributes and Product Variants. Product Attributes describe individual properties of a product, and Product Variants describe individually sellable variations of a particular product. The key consideration when modeling products is deciding what to model as Product Attributes and what to model as Product Variants. Often sellable goods of a business are almost the same in most aspects, but differ in just one or some Product Attributes. These can then be handled as Product Variants and be grouped in one parent product. Products: keep in mind Keep the following in mind when thinking about what products to create: - Remember the commercetools’ platforms limits: if a Product has too many Product Variants, it might approach the JSON document size limit. In general, it’s better to have more Products which have less Product Variants. Products: helpful questions When modeling products, ask yourself the following questions: - What characteristics of the product need to be captured? - What should be a Product Variant or a Product? Try it out a few different ways before committing to anything. Product variants Product Variants generally represent a distinct SKU or sellable good. In some cases, like Products which have various sizes, they represent a group of sellable goods which are identical except for one or two Product Attributes. These attributes which differ are generally not usable as sorting or ordering selectors (site navigation), and therefore aren't modeled as categories, but are more useful as filters for search results. For example, a clothing store might have a product, "Women's Pants" which has the following variants: - Red Women's Pants in size 36 - Green Women's Pants in size 38 - Green Women's Pants in size 40 When to use variants Using variants is a key decision which must be made before importing your product data into commercetools. Product variants are a powerful way to model catalog data, but can lead to issues with data size. For example, if you model a Product Variant for each size of pant and each color available, this often results in a large data set. When deciding to use product variants, try multiplying the variable attributes with each other. This gives a rough idea of how many variants a product might have. For example, 5 colors, 10 sizes and 4 fabrics make a selection of 200 possible combinations. You can model this as a single Product which has 200 Product Variants, but we'd recommend modeling this as multiple products instead, taking one of the variable attributes (color or fabric, for example) and using that as a Product. Best practices: products, variants and attributes Let’s revisit the diagram of the Product Data Model: As we mentioned previously, Products contain Product Variants, and they all contain Product Attributes. From a data complexity standpoint, this means the data for a single Product can increase exponentially very easy, and reach the JSON document size limit. In particular, creating Products which have too many Product Variants can cause performance decreases. When modeling products, be mindful of the amount of data which will be generated for any one Product. It’s important to be strategic what you choose to model as a Product Variant and what the purpose of a Product Attribute is. In general, we recommend creating more Products with less Variants. In general, approach modeling your Products with the following workflow: - Analyze the current product set’s product attributes. Pay attention to which attributes are common to all (or a group of) products, and which are not. - Decide on your major product groups, and what their common attributes are. These will become your product types. - Decide on the granularity of a Product and a Product Variant. In general, more Products with less Variants is recommended. - Try out a few different options: Don’t commit to a product model before seeing if it will work in some typical and not-so-typical scenarios! Keep in mind the following when modeling products: - product URL slugs, display names, and search keywords are modeled on products, not variants: Keep this in mind when optimizing your product modeling for Search Engine Optimization (SEO) purposes. - product names are weighted 6x more heavily than other attributes in full text searches: When naming products, select a name carefully that doesn't use terms which only apply to one variant. - Tax categories are modeled on the product level: when deciding on what variables to model as product variants, keep in mind that all variants must share the same tax category.
https://docs.commercetools.com/tutorials/product-modeling/products
2021-04-10T15:05:15
CC-MAIN-2021-17
1618038057142.4
[]
docs.commercetools.com
Point Group Point groups are collections of associated points that are treated as a set. This is important because you can apply operations to them as a whole - saving you from having to apply discrete operations to each element. Furthermore, point groups can be used to filter-out points not needed for a specific operation; rather than affecting an entire input, grouping restricts the scope of an operation to the points in a group. Creating Point Groups[edit] To group points in Touch use the Group SOP, any SOP with a point group input field, or the Select state in a SOP Editor. To group points in the SOP Editor: - Use the Select state, and in the sub-icons, choose the Point Groups icon. - Select the desired points with the cursor. - Call up the Parameters dialog by clicking the + button beside the sub-icons. Click on the Combine Groups page-tab. - Type a new group name in the edit field. - Click on the group <- selection button. Ordered and Unordered Groups[edit] A point group can be ordered or unordered. In the SOP Editor's Select state, a single click of the mouse button performs an ordered selection. Bulk selections are made by dragging the cursor across the points. This action creates a marquee box that encloses a number of points. Points selected in this fashion generate an unordered group. The only time bulk selections generate or maintain an ordered selection is when only one point is caught in the marquee box. Unordered groups store their points in creation order; ordered groups store points in selection order. If you want to reselect the points in the group, you can do so by calling up the Parameters dialog from the Select state, and selecting the group name from the Group pop-up menu under the Combine Groups page-tab. Then click on the button selection <- group. When a point is deleted, Touch automatically removes the point from all the point groups it might belong to. See also: Primitive Groups, Geometry Detail, Point, Point List, Point Class, Primitive, Prims Class, Polygon, Vertex, SOP, SOP Class, Script SOP, Attributes.
https://docs.derivative.ca/Point_Group
2021-04-10T14:51:22
CC-MAIN-2021-17
1618038057142.4
[]
docs.derivative.ca
Promote Labs Login URLs If you have misplaced the email with your login details, you can request a new password by heading to the login page for your product and using the Forgot Password Function with your payment email address. You will receive an email with new credentials. Be sure to bookmark the login URL and save your credentials - in your browser or your favorite password manager. For the following apps, please head to to get the login URLs If your product isn't mentioned above - then take a look here:
https://docs.promotelabs.com/article/1272-promote-labs-login-urls
2021-04-10T14:17:59
CC-MAIN-2021-17
1618038057142.4
[]
docs.promotelabs.com
When we are ready with our test suite we can run the tests. Click on Run.. item in the main menu (or press F6). You get Run Tests modal window Learn more about "interactive mode" option in Interactive mode Learn more about "update comparison images" option in CSS Regression Testing The modal window allows you to select a target environment: We can use Browser options tab to adjust the runner On this panel we can select a browser to run the tests: Available options are: Headless Chromium allows running Chromium in a headless/server environment. In this mode the test will run considerably faster. Chromium is an open-source web browser, which is used as basis for Google Chrome browser. Puppetry downloads and uses a specific version of Chromium so its API is guaranteed to work out of the box. Google Chrome is a cross-platform web browser developed by Google. Mozilla Firefox is a free and open-source web browser developed by the Mozilla Foundation. Connect to Chrome - connecting to a running instance of Chrome Browser-specific options: DevTools - enables Chrome DevTools in the browser incognito window - runs the tests in private session maximized - maximizes the browser window fullscreen - switches the browser in fullscreen mode ignore HTTPs errors - tolerates HTTPs errors like invalid certificate Besides you can manually provide any Chromium command line options in the textbox below. You can also specify location of Chrome extension. Learn more here There are some cases when we need to connect to a ruining instance of chrome instead of starting a new one. For example to bypass reCaptcha we can solve it manually in Chrome and then run the tests on it. In order to connect we need to start Chrome in command-line with remote-debugging-port parameter. start chrome.exe –remote-debugging-port=9222 --user-data-dir=remote-profile /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9222 --no-first-run --no-default-browser-check --user-data-dir=$(mktemp -d -t 'chrome-remote_data_dir') google-chrome --remote-debugging-port=9222 Next we navigate in the started browser to. On the page we can see a JSON object. We shell copy the value of webSocketDebuggerUrl property. Open Run Tests modal window (F6), switch to Browser options tab and paste the saved value into WS Endpoint input. Now we can press Run. As we press Run button the tests are sent to Jest and we have to wait for Test Report
https://docs.puppetry.app/v/3.0.0/running-tests
2021-04-10T14:10:41
CC-MAIN-2021-17
1618038057142.4
[]
docs.puppetry.app
scrontab scrontab can be used to define a number of recurring batch jobs to run on the cluster at a scheduled interval. Much like its namesake, crontab, the scrontab command maintains entries in a file that are executed at specified times or intervals. Simply type scrontab from any cluster node and add your job entries in the editor window that appears, one per line. Then save and exit the editor the same way you would exit vim[1]. Entries use the same format as cron. For an explanation on crontab entry formats, see the wikipedia page for cron. This example scrontab entry runs a jobscript in user jharvard’s home directory called runscript.sh at 23:45 (11:45 PM) every Saturday: 45 23 * * 6 /n/home01/jharvard/runscript.sh Note: Your home directory may be located in a different location. To find the directory you are in, type pwd at the command line. For more information on scrontab, type man scrontab on any cluster node to see the scrontab manual page. [1] For those who prefer a different editor, you can precede the scrontab command with the EDITOR variable. For example, if you want to use nano, you could invoke scrontab like so: EDITOR=nano scrontab
https://docs.rc.fas.harvard.edu/kb/scrontab/
2021-04-10T14:01:06
CC-MAIN-2021-17
1618038057142.4
[]
docs.rc.fas.harvard.edu
First you need to download the latest release from our website then, set executable permission to the .AppImage file: chmod +x Bottles-*-x86_64.AppImage Move the file to a safe path where there is no risk of deleting them and double click on it, or launch via the terminal by typing: ./Bottles-*-x86_64.AppImage At the first launch, Bottles asks you if you want to install the AppImage, this will make it available in the applications menu of the Desktop Environment in use: You can find bottles on the official Flathub page Ensure the flathub repository is active: You can install bottles with a simple command: flatpak install flathub com.usebottles.bottles After this you can find the bottles icon in your menu or in your applications. flatpak remote-add --if-not-exists flathub Then install Bottles with: flatpak install com.usebottles.bottles Or, if your package manager supports flathub links, click here. We need the following dependencies: org.gnome.Sdk org.gnome.Sdk.Compat.i386 org.freedesktop.Sdk.Extension.toolchain-i386 Download the latest bottles source from GitHub: wgen -O bottles-source.zip bottles-source.zipcd bottles-source Build can be performed using flatpak-builder (installable using your distribution package manager like apt, dnf, ..): flatpak-builder --repo=bottles --force-clean --user build-dir com.usebottles.bottles.ymlflatpak remote-add --user bottles bottles --no-gpg-verifyflatpak install --user bottles com.usebottles.bottles Then run using flatpak command: flatpak run com.usebottles.bottles Our community provides non-official packages for installing Bottles on some distributions. There are two packages from AUR for Arch Linux: bottles-git (which offers the latest version from our git repository) To install one of these packages use an AUR helper like yay or paru: yay -S bottles-git or manually using makepkg: git clone bottles-gitmakepkg -si The latest rpm package ca be found in this repository and can be installed double clicking on it (in most distributions) or by terminal using the rpm command: $ rpm -i package_name.rpm You can also build Bottles from source. Requirements: meson ninja python3 glib glib2-devel on Fedora libglib2.0-dev on Debian/Ubuntu Clone Bottles from GitHub: git clone Bottles and build using meson and install using ninja: mkdir buildmeson build && cd buildninja -j$(nproc)$ ninja install
https://docs.usebottles.com/getting-started/installation
2021-04-10T14:28:32
CC-MAIN-2021-17
1618038057142.4
[]
docs.usebottles.com
iOS Device Management and ACP Tools This article provides context around Mobile Device Management(MDM) services and teaches you the best practices for deploying the ACP Tools App to iOS devices in your organization. Device Management Why should I use device management? Device management providers can help streamline a few following areas: - Configuring and Updating iOS Devices quickly and easily - Enforcing Security Policies - Locating, Deactivating, and Erasing Devices - Naming Devices Consistently (e.g. Nursing Home iPad 1, Nursing Home iPad 2) - Assigning Devices to Users within Your Organization - Automatically Installing Third Party Apps, Books, and other Content What are the requirements for automatically setting up devices remotely? If you want to setup devices remotely and automatically (instead of plugging in devices), you'll need three things: - Purchase your iOS devices from Apple or an Authorized Reseller - Enroll in Apple's Device Enrollment Program (DEP) - Provide configuration profile(s) using a Mobile Device Management(MDM) service to control and setup your devices How does device management work? Device management installs a small file called a Configuration Profile on a device. This file is a bundle of settings that the device will adhere to. A device can have one or many different installed configuration profiles depending on management needs. Examples might include: - Having devices automatically join a password protected Wi-Fi network - Preventing users from adding or removing specific apps - Hiding unused apps on a device - Disabling the camera In this example, each of these configuration options could be combined as a single configuration profile or 4 distinct profiles. For example, settings specific to an advance care planning project could be added in addition to organizational security policies. Configuration Profiles can be installed locally by connecting to a computer, or remotely by enrolling a device in a Mobile Device Management (MDM) Service. How would I setup a device management service? Wouldn't it be great if each device you handed to your staff was setup automatically? What if devices you ordered from Apple or an Apple Authorized reseller was configured out of the box? The best way to roll out device management is by partnering with your organization's technology team. The initial setup requires some technical knowledge, but once configured, the process is a seamless and wonderful experience for your team. Please familiarize yourself with Apple's Device Enrollment program and guide found here:. How do I make sure that every device is setup correctly? Ensuring that you can effectively manage the devices for your ACP project is essential. There are two main workflows for managing devices in an organization: - Apple Configurator 2 — This is a free application provided by Apple—this makes it easy to deploy iPad, iPhone, iPod touch, and Apple TV devices in your school or business. To learn more about this process, please visit Apple’s Business & Education Support for guidance on device management best practices. It requires devices to be connected to a computer and is an attractive option if your organization does not have a Mobile Device Management (MDM) service available. - Mobile Device Management (MDM) Service — This is a remote service that sends commands to enrolled devices depending on your management needs. The options listed above are not mutually exclusive and can be leveraged together. You can think of the MDM service as high-level master controller for all devices and Apple Configurator For example, a school could manage security and network settings through an MDM service, but a cart of iPads would be configured on a per classroom basis. Additionally, a hospital could manage security and network settings through an MDM service, and configure iPads on a per clinical team basis preload different content (e.g. cardiology vs. pediatric focused apps) based on clinical settings. How do I make sure the ACP Tools App is installed? Using Apple Configurator 2 and/or an MDM service allows you to specify which Apps you would like installed on enrolled devices. How does the ACP Tools App fit into a Bring Your Own Device (BYOD) strategy? In some organizations, users may be allowed to bring their own phone, tablet, or computer, as long as it meets the security and management policies of the organization. This is commonly referred to as a "Bring Your Own Device" or BYOD strategy. As long as a device is enrolled in an MDM service, it can adhere to your policies, regardless of who purchased it originally. How do I lock a device down with an MDM service? Device management services on iOS 9 support a feature called blueprints. These are an easy way to lock features down and make sure apps are always installed. If a blueprint is applied to a device, it will honor the rules you've specified. The procedure to create a blueprint will vary depending on your device management service. We use a service called Bushel, which out of the box supports a lot of restrictions found here. In this example, we want ACP Tools to be automatically installed and the camera to be disabled: - Login to Bushel - Select the Blueprints Tab - Select the + button to create a new blueprint and save it - Choose the devices you want the blueprint to apply to - Select the Apps tab and choose ACP Tools - Select the Restrictions tab and choose Disable Camera - Click the "Sync" button to apply these changes to your specified devices What is the best way to keep track of devices? Device management allows an organization to assign devices to individuals. Using an MDM service allows an organization to remotely locate and erase devices that go missing. Recommended Device Restrictions MDM services allow you to lock down many aspects of a device. Bushel provides a comprehensive list of available restrictions here which are standard for iOS 9 devices and above. Depending on your MDM service, these restrictions may vary. Is there a common approach you recommend? Your requirements will vary depending on your internal policies, so it is important that you coordinate with your technology and security teams. Many aspects of devices can put PHI at risk and should be turned off, these include the camera, messaging, and backups, syncing and other sharing services. If you want the highest level of control over the device, you must follow these steps: - Make your device's Supervised using Apple Configurator 2 - Enroll your devices in your MDM service (such as Bushel) - Select the restrictions you would like and save a Blueprint - Apply the blueprint to the enrolled devices What restrictions do most customers enable or disable? For all devices, we typically see customers do the following: - Require WiFi to join only Authorized Networks - Disable Camera, Siri, iCloud Backups, Screenshots, iCloud Keychain - Pre-Install Important Third Party Apps - If Siri is enabled, Force Profanity Filter on Siri When a device is Supervised you can restrict additional settings: Disable installing Apps Prevents your staff from installing Apps from the App store. Example: A user could not download a note taking app that might be able to capture patient data. Disable Air Drop This is a quick way to transfer files from an iOS device to a Mac. Example: A user could not quickly transfer a note from an iPad to their personal phone or laptop. Disable iCloud Backup, Disable iCloud Document Sync These services backup information to Apple's servers. With these enabled, you may put PHI at risk. Example: A clinician takes notes about a patent using the iPad. If these are backed up using iCloud and contain PHI, that user has violated HIPAA. Disable Erase Content and Settings This makes it so users cannot erase the device and change its settings later. Example: A malicious user could not steal an iPad, erase it, and reconfigure it. Disable “Enable Restrictions" This prevents your users from modifying the device's restrictions. Example: A malicious user could not remove your security policies and reconfigure the device for their own use. Disable Account Modification This prevents users from adding their own email, calendar, and social media accounts to the device. Example: A clinician could not add their Twitter or Facebook account to the device. Disable Device Name Modification If you have a naming standard for your devices, this prevents users from changing the device's name. Example: "iPad - Nursing Homes" could not be changed to "John's iPad". Security and Legal Considerations Is a sign in required every time a resource is viewed? A user can stay signed in to the App for up to two weeks. Staying signed in is designed to provide quick access to the ACP Decisions content library and save health care professionals' time. Does a patient need to provide consent each time a resource is viewed? Each time a resource is viewed, a consent form will appear. This must be agreed to by the patient directly viewing the video or PDF. Does the App store any private healthcare data on the device? No private health care information is stored on the iOS device even when a user is signed in. When a patient or health care professional signs in, the user enters their email address and a password. These credentials are securely sent to My ACP Servers using industry standard SSL encryption and sensitive information is viewed remotely. The app stays signed in using an anonymous session returned by the server. App Network and Bandwidth Considerations 20 MB of space to download. A 2-Page PDF requires 400 KB. The ACP Tools App includes a screen for managing downloads. This lists each resource and the space required to download. If the download is too big for the remaining space on the device, an alert informing the user of the issue will be shown. Additionally, the download management screen can be used to remove downloaded resources from the device to free up space.
https://docs.app.acpdecisions.org/article/163-setting-up-and-managing-devices
2021-04-10T15:11:47
CC-MAIN-2021-17
1618038057142.4
[]
docs.app.acpdecisions.org
SampleSets are still a work in progress, and much of the documentation here is not currently applicable in production. Watch your inbox for notifications from KBase Outreach when these workflows go live. If you are interested in beta testing these Apps, please contact us at [email protected] In KBase, a Sample is an entity that represents some measure material from an experiment. It has a unique ID internal to KBase, aliases, related samples, and structured metadata. Samples only occur within a Narrative as a SampleSet, but the same Sample may be shared by multiple SampleSets. One key feature of KBase if the ability to link the same data across many users and analyses. To do this, sample aliases allow the same data uploaded by multiple people to be linked and connect different analyses of the same data. These aliases can also be used to link Samples in KBase with other methods of sample registration, such as ESS-DIVE or SESAR. Because the metadata about a sample is critically important for, KBase provides the option to upload metadata spreadsheets to link to Samples. This allows for more detailed analysis across Samples, as shown below. As with all data in KBase, Samples data must be uploaded to the staging area and then imported into a Narrative. When importing new Samples, the default action is to create a new SampleSet. Every Sample must belong to a SampleSet, even if that SampleSet contains only one Sample. Currently, all samples must be formatted to fit SESAR or Enigma formats. This table demonstrates how such an upload might be formatted: When analyzing amplicon data, the consensus sequence for each OTU must be included for some steps of analysis. This data should be added as a separate FASTA file where each sequence ID exactly matches the OTU ID in the amplicon matrix. Like other data in the Samples workflows, chemical abundance data must be formatted to ensure samples are correctly linked. To this end, the Create Chemical Abundance Matrix Template creates an Excel spreadsheet for direct download that can be populated with chemical abundance data. Once imported, the SampleSet viewer widget allows you to see the samples data that has been imported. This includes all the metadata such as collection information that was uploaded, and can be selectively viewed using the search bar. More details for the SampleSets can be found on the landing page. Click on the SampleSet name in the viewer widget or the binoculars button in the data pane to view the landing page. This lists Samples can be used as a basis to generate OTU sheet or chemical abundance templates that can be used for amplicon or chemical abundance analysis, respectively, in KBase. This allows the amplicon and chemical abundance analysis to remain linked to the sample metadata and enrich analysis. SampleSets currently must be uploaded in SESAR or ENIGMA formats, but more formats may become available in the future. For more information on the SESAR format, please see their website. To preserve SampleSet content, the default setting is for the SampleSets to only be modifiable by the original uploader. The Update SampleSet Access Controls App allows other users to be added, after which they can add Samples to the set using the Import Samples App. As previously stated, amplicons must include consensus sequences and an amplicon matrix. There are a number of options for generating these matrices, such as QIIME and mothur. We recommend uploading the raw count matrices rather than rarefied or nomalized. KBase can perform these steps on-system, and doing so maintains data provenance for enhanced reproducibility. The Transform Matrix and Rarefy Matrix Apps allow you to perform these steps while maintaining the original matrices as a KBase object and creating a provenance graph to show how the data was transformed. Taxonomies can be assigned using the RDP Classifier App based on the consensus sequences that are supplied. Then, function can be assigned with either FAPROTAX or PICRUSt2. From there, additional statistical comparisons can be calculated, such as clustering (hierarchical or k-means), PCA analysis, and more. For a full catalog of Apps, see the utilities section of the App Catalog or filter Apps by input within the Narrative. Once metabolomics data are uploaded, they can be can be mapped using Escher. Additional statistical analysis of the chemical abundance attribute maps, such as PCA and clustering, can be performed. Apps can be filtered as with the amplicon analysis above. PickAxe can be used if you have a metabolic model in the Narrative. PickAxe applies SMARTS reaction rules to generated novel compounds. The result is a directed graph that branches from a small number of source compounds to a large number of daughter compounds
https://docs.kbase.us/development/samples-and-samplesets
2021-04-10T13:54:15
CC-MAIN-2021-17
1618038057142.4
[]
docs.kbase.us
The new designate::keystone::authtoken::interface parameter has been added, which can be used to set the interface parameter in authtoken middleware. designate::keystone::authtoken::interface Add support for PostgreSQL database backend via designate::db::postgresql database_min_pool_size option is now deprecated for removal, the parameter has no effect. New parameter designate::keystone::authtoken::service_token_roles is introduced so that specific role can be assigned to the servide user who can use service token feature. The designate::pool_manager, designate::pool_manager_cache::memcache and designate::pool_manager_cache::sqlalchemy class is now removed. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/releasenotes/puppet-designate/ussuri.html
2021-04-10T14:39:20
CC-MAIN-2021-17
1618038057142.4
[]
docs.openstack.org
No Module Named ipy_user_conf¶ If you enter QuantumATK in interactive mode (i.e., if you just type “atkpython” on the command line), you may be greeted with a rather lengthy message, asking you to “upgrade”: ImportError Traceback (most recent call last) /opt/QuantumWise/atk-10.8/atkpython/lib/site-packages/IPython/ipmaker.pyc in force_import(modname, force_reload) 61 reload(sys.modules[modname]) 62 else: ---> 63 __import__(modname) 64 65 ImportError: No module named ipy_user_conf WARNING: /home/username/ipy_user_conf.py does not exist, please run %upgrade! WARNING: Loading of ipy_user_conf failed. As the message suggests, enter the command (while you are still in QuantumATK) %upgrade to resolve the issue. The reason is that an existing configuration from a previous version of IPython (either an older QuantumATK or another Python) exists for the user account. Upgrading in this way will not interfere with those older versions.
https://docs.quantumatk.com/faq/faq_installation_nomodule.html
2021-04-10T15:00:06
CC-MAIN-2021-17
1618038057142.4
[]
docs.quantumatk.com
Integrating Remedy SSO with TrueSight Presentation Server The TrueSight Presentation Server integrates with Remedy Single Sign-On to authenticate the TrueSight products that are registered with the Presentation Server. After registering Remedy Single Single Sign-On with the Presentation Server, you can configure some of the Remedy Single Sign-On settings from the TrueSight console. Note - You can also configure an existing Remedy Single Sign-On that is integrated with other BMC products to work the Presentation Server. - If you are configuring the PostgreSQL database to work with the Presentation Server and Remedy Single Sign-On, you must have the PostgreSQL version 9.6.03 or later installed. - For a POC deployment of the Presentation Server with a Remedy Single Sign-On on the same machine, during installation of the Presentation Server ensure that you have assigned different port numbers for Tomcat and PostgreSQL. To update the Remedy SSO configuration from the TrueSight console You can update the protocol, port number, and Admin password changes done to the Remedy SSO server settings from the TrueSight console. - From the TrueSight console, select Administration > Remedy SSO. Click the action menu for Configure Remedy SSO with TrueSight Presentation Server and select Edit. The Remedy SSO Integration page displays the following fields: - Click Test Connection to verify the connection to the Remedy Single Sign-On server. See the notification to ensure the connection was successful. - From the command prompt window, run the following commands to restart the TrueSight Presentation Server: tssh server stop tssh server start To update the Remedy SSO configuration using the tssh commands You can configure the protocol, port number, and password for the Remedy SSO settings using the TrueSight Presentation Server tssh commands. - Open a command prompt window and navigate to the following folder: - (Windows) <installationDirectory>\truesightpserver\bin - (Linux) <installationDirectory>/truesightpserver/bin - Run the following commands to change the configurations: tssh properties set bmc.sso.protocol <protocol> tssh properties set bmc.sso.port <portNumber> tssh properties set bmc.sso.password <new password> encrypt - Run the following command to apply the changes: tssh properties reload - From the command prompt window, run the following commands to restart the TrueSight Presentation Server: tssh server stop tssh server start Related topic Installing the Presentation Server Related video Click the image to view the video. Related blogs in BMC Communities TrueSight 11.00 & Remedy Single Sign-on 9.1.03.01 Install & Configuration New supported authentications for Truesight 11.x with Remedy Single Sign-On
https://docs.bmc.com/docs/rsso1808/integrating-remedy-sso-with-truesight-presentation-server-820495102.html
2021-04-10T14:44:01
CC-MAIN-2021-17
1618038057142.4
[]
docs.bmc.com
Important These docs are for setting up SSO for users on our original user model. For SSO for users on New Relic One user model, see Authentication domains. Single Sign On (SSO) allows a computer user to log in to multiple systems via a single portal. If you are a New Relic account Owner setting up SSO integration for your organization, you must obtain a SAML certificate that identifies the SSO login URL (and possibly logout URL) for your organization. The other types of information required for SSO integration will vary depending on the SAML service provider being used. Requirements Requirements include: - These docs apply for managing users on our original user model. For SSO for users on New Relic One user model, see Authentication domains. - Access to this feature depends on your subscription level. - Owner user role required Providers supported by New Relic For a list of the SAML service providers that New Relic currently supports for SSO integration: From the New Relic title bar, select (account dropdown) > Account settings > Security and authentication > Single sign on. Providers include: - Active Directory Federation Services (ADFS) - Auth0 - Azure AD (Microsoft Azure Active Directory) - Okta - OneLogin - Ping Identity - Salesforce - Generic support for SSO systems that use SAML 2.0 SAML information in New Relic account To integrate with an SAML provider, the provider will need information from you about your New Relic account. Most of the information you will need is visible in your New Relic account on the Single Sign On page, such as: - Metadata URL: Contains multiple pieces of information in a single XML message - SAML version: 2.0 - Assertion consumer URL: The endpoint to New Relic SSO (for example,) - Consumer binding: Transmission method is HTTP-POST - NameID format: Email address - Attributes: None required - Entity ID: Account URL (default of rpm.newrelic.com) New Relic SAML requirements For SAML providers and service providers like New Relic to be able to work together, their processes must align in certain ways. Here are some aspects of how New Relic implements SSO integration. This will be useful if you are verifying that a specific SAML provider will be able to work with New Relic or if you are troubleshooting implementation problems..
https://docs.newrelic.com/docs/accounts/accounts/saml-single-sign/saml-service-providers/
2021-04-10T15:04:09
CC-MAIN-2021-17
1618038057142.4
[]
docs.newrelic.com
Request Tracing in Payara Micro Since Payara Micro 4.1.1.164 The Request Tracing service works the same on Payara Micro and will trace the same events described in its documentation. Usage Payara Micro exposes 3 command line options to configure request tracing. To be explicit about the desired configuration, all three arguments can be used, but this is not necessary since the command to enable tracing can also accept explicit configuration options. Since all three options can be specified with an extra parameter following the --enableRequestTracing argument as a shorthand, a detailed summary of this shorthand usage is given below. Configuration Options
https://docs.payara.fish/documentation/payara-micro/services/request-tracing.html
2018-09-18T15:06:48
CC-MAIN-2018-39
1537267155561.35
[]
docs.payara.fish
Shader Compiler Proxy Some mobile devices may be connected via a USB TCP/IP tunnel and may not have direct network access to a shader compiler server. The shader compiler proxy component in Lumberyard allows such devices to forward shader compiler requests through the Asset Processor connection. This proxy connection only works for connecting to the shader compiler server on that protocol. It is not a general purpose network bridge or tunnel. To use the shader compiler proxy, open the system_ file and modify the following values: assetsplatform.cfg r_ShaderCompilerServer =– Sets the location of the shader compiler server as seen from the computer running IP address of shader compiler server AssetProcessor.exe. For example, localhost could be used if both the Asset Processor and the shader compiler server are running on the same computer. r_ShadersRemoteCompiler = 1– Compiles shaders remotely. r_AssetProcessorShaderCompiler = 1– Routes shader compiler through the Asset Processor. If not set to 1, the device attempts to directly connect to the shader compiler server through the IP set.
https://docs.aws.amazon.com/lumberyard/latest/userguide/asset-pipeline-shader-compiler.html
2018-09-18T16:14:55
CC-MAIN-2018-39
1537267155561.35
[]
docs.aws.amazon.com
Leverage business rules In this section, you’ll learn how to add decision automation to your process by using BPMN 2.0 Business Rule Tasks and DMN 1.1 Decision Tables. Add a Business Rule Task to the Process Use the Camunda Modeler to open the Payment Retrieval process then click on the Approve Payment Task. Change the activity type to Business Rule Task in the wrench button menu. Next, link the Business Rule Task to a DMN table by changing Implementation to DMN and Decision Ref to approve-payment in the properties panel. In order to retrieve the result of the evaluation and save it automatically as a process instance variable in our process, we also need to change the Result Variable to approved and use singleEntry as the Map Decision Result in the properties panel. Save your changes and deploy the updated process using the Deploy Button in the Camunda Modeler. Create a DMN table using the Camunda Modeler Create a new DMN table by clicking File > New File > DMN Table. Specify the DMN table First, give the DMN table the name Approve Payment and the ID approve-payment. The DMN table ID must match the Decision Ref in your BPMN process. Next, specify the input expressions for the DMN table. In this example, we’ll decide whether a payment is approved based on the item name. Your rules can also make use of the FEEL Expression Language, JUEL or Script. If you like, you can read more about Expressions in the DMN Engine. For the input column, use item as the Input Expression and Item as the Input Label: Next, set up the output column. Use approved as the Output Name and Approved as the Output Label for the output column “Approved”: Let’s create some rules by clicking on the plus icon on the left side of the DMN table. We should also change the Output Column to the Data Type boolean: After setup, your DMN table should look like this: Deploy the DMN table To deploy the Decision Table, click on the Deploy button in the Camunda Modeler, give it Deployment Name “Payment Retrieval Decision”, then hit the Deploy button. Verify the Deployment with Cockpit Now, use Cockpit to see if the decision table was successfully deployed. Go to. Log in with the credentials demo / demo. Navigate to the “Decisions” section. Your decision table Approve Payment should be listed as deployed decision definition. Inspect using Cockpit and Tasklist Next, use Tasklist to start two new Process Instances and verify that depending on your input, the Process Instance will be routed differently. To do so, go to. Log in with demo / demo. Click on the button to start a process instance and choose the Payment process. Use the generic form to add the variables as follows: Hit the Start Instance button. Next, click again on the button to start another process instance and choose the Payment process. Use the generic form to add the variables as follows: You’ll see that depending on the input, the worker will either charge or not charge the credit card. You can also verify that the DMN tables were evaluated by using Camunda Cockpit. Go to. Log in with the credentials demo / demo. Navigate to the “Decisions” section and click on Approve Payment. Check the different Decision Instances that were evaluated by clicking on the ID in the table. A single DMN table that was executed could look like this in Camunda Cockpit: Success! Congratulations! You’ve successfully completed the Camunda BPM Quick Start. Ready to continue? We recommend the Camunda BPM documentation. Catch up: Get the Sources of Step-5Or download as archive from here.
https://docs.camunda.org/get-started/quick-start/decision-automation/
2018-09-18T16:07:35
CC-MAIN-2018-39
1537267155561.35
[array(['../img/modeler-businessrule-task1.png', None], dtype=object) array(['../img/modeler-businessrule-task2.png', None], dtype=object) array(['../img/modeler-new-dmn-table.png', None], dtype=object) array(['../img/modeler-dmn1.png', None], dtype=object) array(['../img/modeler-dmn2.png', None], dtype=object) array(['../img/modeler-dmn3.png', None], dtype=object) array(['../img/modeler-dmn4.png', None], dtype=object) array(['../img/modeler-dmn5.png', None], dtype=object) array(['../img/modeler-dmn6.png', None], dtype=object) array(['../img/cockpit-approve-payment.png', None], dtype=object) array(['../img/tasklist-dmn1.png', None], dtype=object) array(['../img/tasklist-dmn2.png', None], dtype=object) array(['../img/cockpit-dmn-table.png', None], dtype=object)]
docs.camunda.org
Functions Functions and functional programming in Perl 6 Routines are one of the means Perl 6 has to reuse code. They come in several forms, most notably methods, which belong in classes and roles and are associated with an object; and functions (also called subroutines or subs, for short), which can be called independently of objects. Subroutines default to lexical ( my) scoping, and calls to them are generally resolved at compile time. Subroutines can have a signature, also called parameter list, which specifies which, if any, arguments the signature expects. It can specify (or leave open) both the number and types of arguments, and the return value. Introspection on subroutines is provided via Routine. Defining/Creating/Using functions Subroutines» Blocks and lambdas. Signatures The parameters that a function accepts are described in its signature. sub format(Str $s) { ... } -> $a, $b { ... } Details about the syntax and use of signatures can be found in the documentation on the Signature class. Automatic signatures If no signature is provided but either of the two automatic variables @_ or %_ are used in the function body, a signature with *@_ or *%_ will be generated. Both automatic variables can be used at the same time. sub s ;say .signature # OUTPUT: «(*@_, *%_)␤» Arguments Arguments are supplied as a comma separated list. To disambiguate nested calls, use parentheses: sub f(); # call the function reference c with empty parameter listsub g();say(g(42), 45); # pass only 42 to g() When calling a function, positional arguments should be supplied in the same order as the function's signature. Named arguments may be supplied in any order, but it's considered good form to place named arguments after positional arguments. Inside the argument list of a function call, some special syntax is supported: sub f(|c);f :named(35); # A named argument (in "adverb" form)f named => 35; # Also a named argumentf :35named; # A named argument using abbreviated adverb formf 'named' => 35; # Not a named argument, a Pair in a positional argumentmy \c = <a b c>.Capture;f |c; # Merge the contents of Capture $c as if they were supplied Arguments passed to a function are conceptually first collected in a Capture container. Details about the syntax and use of these containers can be found in the documentation on the Capture class. When using named arguments, note that normal List "pair-chaining" allows one to skip commas between named arguments. sub f(|c);f :dest</tmp/foo> :src</tmp/bar> :lines(512);f :32x :50y :110z; # This flavor of "adverb" works, toof :a:b:c; # The spaces are also optional. Return values Any Block or Routine will provide the value of its last expression as a return value to the caller. If return or return-rw are called their parameter, if any, will become the return value. The default return value is Nil. sub a ;sub b ;b;# OUTPUT: «42␤» Multiple return values are returned as a list or by creating a Capture. Destructuring can be used to untangle multiple return values. sub a ;put a.perl;# OUTPUT: «(42, "answer")␤»my (, ) = a;put [, ];# OUTPUT: «answer 42␤»sub b ;put b.perl;# OUTPUT: «\("a", "b", "c")␤» Return type constraints Perl 6 has many ways to specify a function's return type: sub foo(--> Int) ; say .returns; # OUTPUT: «(Int)␤» sub foo() returns Int ; say .returns; # OUTPUT: «(Int)␤» sub foo() of Int ; say .returns; # OUTPUT: «(Int)␤» my Int sub foo() ; say .returns; # OUTPUT: «(Int)␤» Attempting to return values of another type will cause a compilation error. sub foo() returns Int ; foo; # Type check fails. Conventions and idioms While the dispatch system described above provides a lot of flexibility, there are some conventions that most internal functions, and those in many modules, will follow. Slurpy conventions Perhaps the most important one of these conventions is the way slurpy list arguments are handled. Most of the time, functions will not automatically flatten slurpy lists. The rare exceptions are those functions that don't have a reasonable behavior on lists of lists (e.g., chrs) or where there is a conflict with an established idiom (e.g., pop being the inverse of push). If you wish to match this look and feel, any Iterable argument must be broken out element-by-element using a **@ slurpy, with two nuances: An Iterable inside a Scalar container doesn't count. Lists created with a ,at the top level only count as one Iterable. This can be achieved by using a slurpy with a + or +@ instead of **: sub grab(+) which is shorthand for something very close to: multi sub grab(**)multi sub grab(\a) This results in the following behavior, which is known as the "single argument rule" and is important to understand when invoking slurpy functions: grab(1, 2); # OUTPUT: «grab 1␤grab 2␤»grab((1, 2)); # OUTPUT: «grab 1␤grab 2␤»grab($(1, 2)); # OUTPUT: «grab 1 2␤»grab((1, 2), 3); # OUTPUT: «grab 1 2␤grab 3␤» This also makes user-requested flattening feel consistent whether there is one sublist, or many: grab(flat (1, 2), (3, 4)); # OUTPUT: «grab 1␤grab 2␤grab 3␤grab 4␤»grab(flat $(1, 2), $(3, 4)); # OUTPUT: «grab 1 2␤grab 3 4␤»grab(flat (1, 2)); # OUTPUT: «grab 1␤grab 2␤»grab(flat $(1, 2)); # OUTPUT: «grab 1␤grab 2␤» It's worth noting that mixing binding and sigilless variables in these cases requires a bit of finesse, because there is no Scalar intermediary used during binding. my = (1, 2); # Normal assignment, equivalent to $(1, 2)grab(); # OUTPUT: «grab 1 2␤»my := (1, 2); # Binding, $b links directly to a bare (1, 2)grab(); # OUTPUT: «grab 1␤grab 2␤»my \c = (1, 2); # Sigilless variables always bind, even with '='grab(c); # OUTPUT: «grab 1␤grab 2␤» Functions are first-class objects Functions and other code objects can be passed around as values, just like any other object. There are several ways to get hold of a code object. You can assign it to a variable at the point of declaration: my = sub (Numeric )# and then use it:say (6); # OUTPUT: «36␤» Or you can reference an existing named function by using the &-sigil in front of it. sub square() ;# get hold of a reference to the function:my = This is very useful for higher order functions, that is, functions that take other functions as input. A simple one is map, which applies a function to each input element: sub square() ;my = map , 1..5;say join ', ', ; # OUTPUT: «1, 4, 9, 16, 25␤» Infix form To call a subroutine with 2 arguments like an infix operator, use a subroutine reference surrounded by [ and ]. sub plus ;say 21 [] 21;# OUTPUT: «42␤» Closures All code objects in Perl 6 are closures, which means they can reference lexical variables from an outer scope. sub generate-sub()my = generate-sub(21);(); # OUTPUT: «42␤» Here, $y is a lexical variable inside generate-sub, and the inner subroutine that is returned uses it. By the time that inner sub is called, generate-sub has already exited. Yet the inner sub can still use $y, because it closed over the variable. Another closure example is the use of map to multiply a list of numbers: my = 5;say join ', ', map , 1..5; # OUTPUT: «5, 10, 15, 20, 25␤» Here, the block passed to map references the variable $multiply-by from the outer scope, making the block a closure. Languages without closures cannot easily provide higher-order functions that are as easy to use and powerful as map. Routines Routines are code objects that conform to type Routine, most notably Sub, Method, Regex and Submethod. They carry extra functionality in addition to what a Block supplies: they can come as multis, you can wrap them, and exit early with return: my = set <if for unless while>;sub has-keyword(*)say has-keyword 'not', 'one', 'here'; # OUTPUT: «False␤»say has-keyword 'but', 'here', 'for'; # OUTPUT: «True␤» Here, return doesn't just leave the block inside which it was called, but the whole routine. In general, blocks are transparent to return, they attach to the outermost routine. Routines can be inlined and as such provide an obstacle for wrapping. Use the pragma use soft; to prevent inlining to allow wrapping at runtime. sub testee(Int , Str )sub wrap-to-debug()my = wrap-to-debug();# OUTPUT: «wrapping testee with arguments :(Int $i, Str $s)»say testee(10, "ten");# OUTPUT: «calling testee with \(10, "ten")␤returned from testee with return value "6.151190ten"␤6.151190ten».unwrap();say testee(10, "ten");# OUTPUT: «6.151190ten␤» Defining operators Operators are just subroutines with funny names. The funny names are composed of the category name ( infix, prefix, postfix, circumfix, postcircumfix), followed by a colon, and a list of the operator name or names (two components in the case of circumfix and postcircumfix). This works both for adding multi candidates to existing operators and for defining new ones. In the latter case, the definition of the new subroutine automatically installs the new operator into the grammar, but only in the current lexical scope. Importing an operator via use or import also makes it available. # adding a multi candidate to an existing operator:multi infix:<+>(Int , "same") ;say 21 + "same"; # OUTPUT: «42␤»# defining a new operatorsub postfix:<!>(Int where ) ;say 6!; # OUTPUT: «720␤» The operator declaration becomes available as soon as possible, so you can recurse into a just-defined operator: sub postfix:<!>(Int where )say 6!; # OUTPUT: «720␤» Circumfix and postcircumfix operators are made of two delimiters, one opening and one closing. sub circumfix:<START END>(*)say START 'a', 'b', 'c' END; # OUTPUT: «(start [a b c] end)␤» Postcircumfixes also receive the term after which they are parsed as an argument: sub postcircumfix:<!! !!>(, )say 42!! 1 !!; # OUTPUT: «42 -> ( 1 )␤» Blocks can be assigned directly to operator names. Use a variable declarator and prefix the operator name with a &-sigil. my :<ieq> = -> |l ;say "abc" ieq "Abc";# OUTPUT: «True␤» Precedence Operator precedence in Perl 6 is specified relatively to existing operators. The traits is tighter, is equiv and is looser can be provided with an operator to indicate how the precedence of the new operators is related to other, existing ones. More than one trait can be applied. For example, infix:<*> has a tighter precedence than infix:<+>, and squeezing one in between works like this: sub infix:<!!>(, ) is tighter(:<+>)say 1 + 2 * 3 !! 4; # OUTPUT: «21␤» Here, the 1 + 2 * 3 !! 4 is parsed as 1 + ((2 * 3) !! 4), because the precedence of the new !! operator is between that of + and *. The same effect could have been achieved with: sub infix:<!!>(, ) is looser(:<*>) To put a new operator on the same precedence level as an existing operator, use is equiv(&other-operator) instead. Associativity When the same operator appears several times in a row, there are multiple possible interpretations. For example: 1 + 2 + 3 could be parsed as (1 + 2) + 3 # left associative or as 1 + (2 + 3) # right associative For addition of real numbers, the distinction is somewhat moot, because + is mathematically associative. But for other operators it matters a great deal. For example, for the exponentiation/power operator, infix:<**> : say 2 ** (2 ** 3); # OUTPUT: «256␤»say (2 ** 2) ** 3; # OUTPUT: «64␤» Perl 6 has the following possible associativity configurations: You can specify the associativity of an operator with the is assoc trait, where left is the default associativity. sub infix:<§>(*) is assoc<list>say 1 § 2 § 3; # OUTPUT: «(1|2|3)␤» Traits Traits are subroutines that run at compile time and modify the behavior of a type, variable, routine, attribute, or other language object. Examples of traits are: is ParentClass# ^^ trait, with argument ParentClasshas is rw;# ^^^^^ trait with name 'rw'does AnotherRole# ^^^^ traithas handles <close>;# ^^^^^^^ trait ... and also is tighter, is looser, is equiv and is assoc from the previous section. Traits are subs declared in the form trait_mod<VERB> , where VERB stands for the name like is, does or handles. It receives the modified thing as argument, and the name as a named argument. See Sub for details. multi sub trait_mod:<is>(Routine , :!)sub square() is doublessay square 3; # OUTPUT: «18␤» See type Routine for the documentation of built-in routine traits. Re-dispatching There are cases in which a routine might want to call the next method from a chain. This chain could be a list of parent classes in a class hierarchy, or it could be less specific multi candidates from a multi dispatch, or it could be the inner routine from a wrap. Fortunately, we have a series of re-dispatching tools that help us to make it easy. sub callsame callsame calls the next matching candidate with the same arguments that were used for the current candidate and returns that candidate's return value. proto a(|)multi a(Any )multi a(Int )a 1; # OUTPUT: «Int 1␤Any 1␤Back in Int with 5␤» sub callwith callwith calls the next candidate matching the original signature, that is, the next function that could possibly be used with the arguments provided by users and returns that candidate's return value. proto a(|)multi a(Any )multi a(Int )a 1; # OUTPUT: «Int 1␤Any 2␤Back in Int with 5␤» Here, a 1 calls the most specific Int candidate first, and callwith re-dispatches to the less specific Any candidate. Note that although our parameter $x + 1 is an Int, still we call the next candidate in the chain. ));. sub nextsame nextsame calls the next matching candidate with the same arguments that were used for the current candidate and never returns. proto a(|)multi a(Any )multi a(Int )a 1; # OUTPUT: «Int 1␤Any 1␤» sub nextwith nextwith calls the next matching candidate with arguments provided by users and never returns. proto a(|)multi a(Any )multi a(Int )a 1; # OUTPUT: «Int 1␤Any 2␤» sub samewith samewith calls current candidate again with arguments provided by users and returns return value of the new instance of current candidate. proto a(|)multi a(Int )say (a 10); # OUTPUT: «36288002␤» sub nextcallee Redispatch may be required to call a block that is not the current scope what provides nextsame and friends with the problem to referring to the wrong scope. Use nextcallee to capture the right candidate and call it at the desired time. proto pick-winner(|)multi pick-winner (Int \s)multi pick-winnerwith pick-winner ^5 .pick -> \result# OUTPUT:# And the winner is...# Woot! 3 won The Int candidate takes the nextcallee and then fires up a Promise to be executed in parallel, after some timeout, and then returns. We can't use nextsame here, because it'd be trying to nextsame the Promise's block instead of our original routine. Wrapped routines Besides those are mentioned above, re-dispatch is helpful in more situations. One is for dispatching to wrapped routines: # enable wrapping:use soft;# function to be wrapped:sub square-root().wrap(sub ());say square-root(4); # OUTPUT: «2␤»say square-root(-4); # OUTPUT: «0+2i␤» Routines of parent class Another use case is to re-dispatch to methods from parent classes. is Versionsay LoggedVersion.new('1.0.2'); Coercion types Coercion types force a specific type for routine arguments while allowing the routine itself to accept a wider input. When invoked, the arguments are narrowed automatically to the stricter type, and therefore within the routine the arguments have always the desired type. In the case the arguments cannot be converted to the stricter type, a Type Check error is thrown. sub double(Int(Cool) )say double '21'; # OUTPUT: «42␤»say double 21; # OUTPUT: «42␤»say double Any; # Type check failed in binding $x; expected 'Cool' but got 'Any' In the above example, the Int is the target type to which the argument $x will be coerced, and Cool is the type that the routine accepts as wider input. If the accepted wider input type is Any, it is possible to abbreviate the coercion Int(Any) omitting the Any type, thus resulting in Int(). The coercion works by looking for a method with the same name as the target type: if such method is found on the argument, it is invoked to convert the latter to the expected narrow type. From the above, it is clear that it is possible to provide coercion among user types just providing the required methods: # wants a Bar, but accepts Anysub print-bar(Bar() )print-bar Foo.new; In the above code, once a Foo instance is passed as argument to print-bar, the Foo.Bar method is called and the result is placed into $bar. Coercion types are supposed to work wherever types work, but Rakudo currently ) ) This MAIN is defining two kind of aliases, as explained in Signatures: :file($data) aliases the content passed to the command-line parameter --file= to the variable $data; :v(:$verbose) not only aliases v to verbose, but also creates a new command line parameter verbose thanks to the specification of the :. In fact, since this is an alias, both verbose and v can use single or double dashes ( - or --). With file.dat present, this will work this way $ perl6 Main.p624file.datVerbosity off Or this way with -v or --verbose $ perl6 Main.p6 -v24file.datVerbosity on %*SUB-MAIN-OPTS It's possible to alter how arguments are processed before they're passed to sub MAIN {} by setting options in %*SUB-MAIN-OPTS hash. Due to the nature of dynamic variables, it is required to set up %*SUB-MAIN-OPTS hash and fill it with the appropriate settings. For instance: my =:named-anywhere, # allow named variables at any location:!foo, # don't allow foo Unit-scoped definition of MAIN If the entire program.
https://docs.perl6.org/language/functions.html
2018-09-18T16:03:21
CC-MAIN-2018-39
1537267155561.35
[]
docs.perl6.org
Running tests¶ Deluge testing is implemented using Trial which is Twisted’s testing framework and an extension of Python’s unittest. See Twisted website for documentation on Twisted Trial and Writing tests using Trial. Testing¶ The tests are located in the source folder under deluge/tests. The tests are run from the project root directory. View the unit test coverage at: deluge-torrent.github.io Trial¶ Here are some examples that show running all the tests through to selecting an individual test. trial deluge trial deluge.tests.test_client trial deluge.tests.test_client.ClientTestCase trial deluge.tests.test_client.ClientTestCase.test_connect_localclient Pytest¶ pytest deluge/tests pytest deluge/tests/test_client.py pytest deluge/tests/test_client.py -k test_connect_localclient Tox¶ All the tests for Deluge can be run using Tox
https://deluge.readthedocs.io/en/latest/contributing/testing.html
2020-05-25T03:52:42
CC-MAIN-2020-24
1590347387219.0
[]
deluge.readthedocs.io
Date: Mon, 25 May 2020 05:04:31 +0000 (UTC) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_8690_1660273982.1590383071038" ------=_Part_8690_1660273982.1590383071038 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Live Forms provides an add-on that yo= u can install into Confluence to create sophisticated forms and workflows i= ntegrated with Confluence. The add-on requires installation of an in-house = Live Forms server. If you are using Confluence server v5.6.x, however, you must use <= /span>Live Forms server v6.1.1+ Refer to the Confluence Add on Release Notes for complete compatibility informa= tion. On This Page: We strongly recommend using Universal add-on Manager (UPM). Please = follow the A= tlassian instructions for adding an add-on. Search for = the frevvo Live Forms and Workflows for Confluence add-on in UPM, select it= and click Install. The add-on will be installed automatically. If you cannot use UPM, you can download the = Live Forms add-on from the Atlassian Marketplace. There are ser= veral Live Forms add-on versions av= ailable for Confluence. The add-on version required depends on your Conflue= nce server version number. See the Live Fo= rms = add-on compatibility matrix for details. Then, fo= llow Atlassian instructions for installing the add-on into Conflue= nce. Remember to first uninstall any previous version. You must first configure the add-on before you can use it. The image bel= ow shows the potential configurations available for the Live Forms Confluence add-on. The image is large so that th= e text is readable. In most cases, you will not need to specify these confi= guration parameters and can leave them blank. Click Configure. This is the scheme, host & port on which the forms server is running= . The add-on (installed in Confluence) must be able to access the Live Forms server. For example, the URL to y= our in-house installation might be:. Enter the tenant admin user for the tenant you created. Remember that yo= u must enter it using the syntax userid@tenant, e.g. [email protected] Type in the above admin user's password. You can specify a Confluence group here. If specified, only users belong= ing to this group can design and add forms, flows or submissions to Conflue= nce pages. If this is left blank, all Confluence users will be able to = ; add forms, flows or submissions to Confluence pages. You can specify a Confluence group here. If specified, only users belong= ing to this group can add the Task List to Confluence pages. If this is lef= t blank, all Confluence users will be able to add the Task List to Confluen= ce pages. Note that all users can still access public forms on public Confl= uence pages. This feature is available in Live Forms v4.1.4 and higher. You can speci= fy a Confluence group here. If specified, users belonging to this group can= edit forms and flows already added to existing Confluence pages belonging = to any other user. If this is left blank, edit access is restricted to the = original designer/owner of the form/flow. Users belonging to th= t= o access Confluence may be rewritten before it is internally passed on to C= onfluence. If that's the case, Confluence URLs generated by Live Forms to perform any redirection will be wron= g since they will reflect the rewritten internal URL. You can specify the e= xternally visible URL to Confluence here. If not specified, this will be in= ferred from the HTTP Request received by Confluence. In most cases, you do not need to configure this. = p> In some configurations, the externally visible URL to the forms server m= ay be different from the above URL, i.e. the URL that a browser outside the= firewall uses to access Live Forms = ;could be different from the URL that the add-on (inside the firewall) uses= to access Live Forms. If that's t= he case, enter the externally visible URL in this configuration parameter. = Your users must be able to access the Live Form= s wi= ll config= ured, you should be able to access/create forms on Confluence pages. If checked, the add-on will print debugging information to Confluence pa= ges. In addition, for forms and flows, you can also specify a parameter to = the form macro that is generated in Confluence pages: _norender:true. If th= Live F= orms server, which may be remote in order to process your form.= If you find that the Live Forms ma= cros do not work, ask your network administrator if Confluence needs to acc= ess the Internet through a web proxy. Please refer to Configure Web Proxy Support for Confluence for further informatio= n. There are occasions where you may want to login directly to the Live For= ms server rather than via Confluence. Here are a few points to understand:<= /p> If you get this error adding a flow to a Confluence page you can solve t= his with a simple browser setting. Interne= t explorer has modified this page to prevent cross site scripting=20 Go to IE Tools -> Internet options -> Security tab -> select cu= stom level -> disable XSS Filter
https://docs.frevvo.com/d/exportword?pageId=18588395
2020-05-25T05:04:31
CC-MAIN-2020-24
1590347387219.0
[]
docs.frevvo.com
What versions of Microsoft Dynamics does our Dynamics CRM element support? - OnPrem Internet Facing Deployments: Dynamics CRM 2011, 2013, 2015, 2016 - Cloud Versions: Dynamics CRM Online 2016, Dynamics 365 - OnPrem G2C (non-internet facing): Not Supported Today Does our Dynamics CRM element support on-prem deployments? - Yes, if it is deployed through an IFD (Internet Facing Deployment) - and if so, no IP addresses are necessary - SDK 7 supported Can we use Ground2Cloud if it is not an IFD? - No, currently Ground2Cloud cannot support Dynamics On-Prem deployments that are not IFD - Dynamics does not use Ground2Cloud as it uses multiple undocumented ports underneath to perform authentication Can I use a Dynamics 365 sandbox/account with our Dynamics CRM element? Yes, Dynamics365 is backward compatible with our Dynamics CRM element. What's the URL to connect my Dynamics to Cloud Elements elements? The siteURL is the URL that's displayed when you log into Dynamics from a web browser. What's the difference between Dynamics CRM and Dynamics 365? Dynamics365 is the giant umbrella product that holds all the modules of Dynamics (a module is something like: CRM, Finance & Operations, Helpdesk etc.), and users purchase the modules they need and that’s what API access you get as a result of what you’ve purchased. Dynamics 365 offers a better, more consistent API than its legacy systems (every Dynamics CRM version 2016 and before had slightly different APIs) and should work across all Dynamics 365 versions. Dynamics 365 Sandbox Information Trial Access: Note: these trials do NOT all have API access; for example, sales does, but finance/operations does not.
https://docs.cloud-elements.com/home/ac54b3b
2020-05-25T05:17:25
CC-MAIN-2020-24
1590347387219.0
[]
docs.cloud-elements.com
Teams Chat Assist Power BI Report Import Guide This document explains how to install and run the Teams Chat Assist Power BI Report. The report provides general details of how Teams Chat Assist is being used: Who is asking questions and when, who is answering questions and how long it took to respond. Plus a log of all conversations. Note: In order to view the report you must have access the Azure Table Storage Account for your Teams Chat Assist instance. Specifically the Account name or URL and Account Key. Note: The report requires an installed and licensed version of Power BI Desktop. Import Report Start by downloading the Power BI report. In Power BI Desktop, select File > Import > Power BI template. Navigate to the location of report downloaded above, select the report and Open. You will be presented with a dialog that allows you to specify the data source for the report. You should enter your Azure Table Storage Account name or URL and select Load. - If this is the first time you have connected to this account in Power BI Desktop, another dialog is displayed. Enter your Azure Table Storage Account Key. - If your region is set to United States, then the report will be displayed. The Questions page provide an overview of how many questions have been asked, by who and when. The Agents page details who has been answering questions and how quickly they have answered the. Finally the Conversations page lists all details about every question asked. They are multiple filters providing a means for finding individual records. Note: If your region is not set to United States then you will see the following error. The reports ‘Locale for import’ setting has defaulted to an incorrect value. This can be resolved by updating your reports Locale for import. Update the reports ‘Locale for import’ setting Note: Your Power BI region only needs updating if it is not already set to United States - Select File > Options and settings > Options. Scroll down to the ‘Current File’ section and select ‘Regional Settings’. Update the ‘Locale for import’ setting to be English(United States). Press OK. - In the Home tab of the toolbar press the refresh button. The data will be reloaded and the report will now display correctly.
https://docs.modalitysoftware.com/TeamsChatAssist/powerBiReport.html
2020-05-25T05:46:01
CC-MAIN-2020-24
1590347387219.0
[]
docs.modalitysoftware.com
Generate the Submission Key¶ When a document or message is submitted to SecureDrop by a source, it is automatically encrypted with the Submission Key. The private part of this key is only stored on the Secure Viewing Station which is never connected to the Internet. SecureDrop submissions can only be decrypted and read on the Secure Viewing Station. We will now generate the Submission Key. If you aren’t still logged into your Secure Viewing Station from the previous step, boot it using its Tails USB stick, with persistence enabled. Important Do not follow these steps before you have fully configured the Secure Viewing Station according to the instructions. The private key you will generate in the following steps is one of the most important secrets associated with your SecureDrop installation. This procedure is intended to ensure that the private key is protected by the air-gap throughout its lifetime. Create the Key¶ Navigate to Applications ▸ System Tools ▸ Terminal to open a terminal . In the terminal, run gpg --full-generate-key: When it says Please select what kind of key you want, choose “(1) RSA and RSA (default)”. When it asks What keysize do you want?, type 4096. When it asks Key is valid for?, press Enter. This means your key does not expire. It will let you know that this means the key does not expire at all and ask for confirmation. Type y and hit Enter to confirm. - Next it will prompt you for user ID setup. Use the following options: - Real name: “SecureDrop” - Comment: [Your Organization's Name] SecureDrop Submission Key GPG will confirm these options. Verify that everything is written correctly. Then type Ofor (O)kayand hit enter to continue: A box will pop up (twice) asking you to type a passphrase. Since the key is protected by the encryption on the Tails persistent volume, it is safe to simply click OK without entering a passphrase. The software will ask you if you are sure. Click Yes, protection is not needed. Wait for the key to finish generating. Export the Submission Public Key¶ To manage GPG keys using the graphical interface (a program called “Passwords and Keys”), click the clipboard icon in the top right corner and select “Manage Keys”. Click “GnuPG keys” and you should see the key that you just generated. - Select the key you just generated and click “File” then “Export”. - Save the key to the Transfer Device as SecureDrop.asc, and make sure you change the file type from “PGP keys” to “Armored PGP keys” which can be switched at the bottom of the Save window. Click the ‘Export’ button after switching to armored keys. Note This is the public key only. You’ll need to provide the fingerprint of this new key during the installation. Double-click on the newly generated key and change to the Details tab. Write down the 40 hexadecimal digits under Fingerprint. Note Your fingerprint will be different from the one in the example screenshot. At this point, you are done with the Secure Viewing Station for now. You can shut down Tails, grab the Admin Workstation Tails USB and move over to your regular workstation.
https://docs.securedrop.org/en/master/generate_submission_key.html
2020-05-25T03:47:19
CC-MAIN-2020-24
1590347387219.0
[array(['_images/keyring.png', 'My Keys'], dtype=object) array(['_images/exportkey.png', 'Export Key'], dtype=object) array(['_images/exportkey2.png', 'Export Key 2'], dtype=object) array(['_images/fingerprint.png', 'Fingerprint'], dtype=object)]
docs.securedrop.org
Configure Logging Moogsoft AIOps components generate log files to report their activity. As a Moogsoft AIOps administrator, you can refer to the logs to audit system usage or diagnose issues. In certain cases you may want to change logging levels based upon your specific environment or needs. See the Log Levels Reference for details. Moogsoft AIOpsuses Apache Log4j for logging. See the Log4j configuration documentation for more information. Configure Your Log Files You can edit the log configuration files at $MOOGSOFT_HOME/config/logging/ There is a configuration file for every component or servlet in Moogsoft AIOps. These files can be found in $MOOGSOFT_HOME/config/logging/servlets/ and follow the naming convention <servlet_name>.log.json. These configuration files control the logs for the following: events.log.json: Logs for the proxy LAM. graze.log.json: Graze request logs. moogpoller.log.json: Moogpoller logs. moogsvr.log.json: Logs relating to SAML/LDAP authentication and internal API calls. situation_similarity.log.json: Situation Similarity servlet logs. toolrunner.log.json: Toolrunner servlet logs. The other default configuration files include: moog_farmd.log.json:Configures logs for Moogfarmd process. moogsoft.log.json: Configures logs for all of the utilities. integrations.log.json: Configures logs for LAMs and integrations. You can change log levels and make other configuration changes to components while they are running. Moogsoft AIOps reads any changes and applies them every two seconds. You can configure these files to meet your requirements. Refer to the Log4j documentation to see the available properties or see Log Configuration File Examples. Log Files by Component The following reference provides information about the log files for the various Moogsoft AIOps components. Apache Tomcat Log location: /usr/share/apache-tomcat/logs Primary log file: catalina.out To change the logging level for the Moogsoft AIOps servlets which run in Tomcat, edit the relevant files in $MOOGSOFT_HOME/config/logging/servlets. Nginx Log location: /var/log/nginx Primary log file: error.log To change the logging level for Nginx: Edit /etc/nginx/nginx.conf. Set the LogLevelproperty. For example to enable debug logging: LogLevel debug Restart Ngnix. Moogfarmd By default Moogfarmd and Ticketing integrations write logs into a log file stored in /var/log/moogsoft if you have write permissions for this directory. Otherwise, the logs are written to $MOOGSOFT_HOME/log. By default the log file takes the name of the HA address of the process. For example, MOO.moog_farmd.farmd_instance1.log. MOO is the default HA cluster name in $MOOGSOFT_HOME/config/system.conf. If you change it the Moogfarmd log file path changes accordingly. Restart Moogfarmd after making any of the following configuration changes. To use a custom log configuration file for Moogfarmd: Make a copy of the default Moogfarmd log configuration file and rename it, for example: cd $MOOGSOFT_HOME/config/logging cp moog_farmd.log.json mymoog_farmd.log.json Edit the new file according to your Moogfarmd logging requirements. Edit the configuration_fileproperty in the log_configsection of moog_farmd.confto point to the new file. For example: log_config: { configuration_file: "mymoogfarmd.log.json" } To change the logging level for Moogfarmd, edit the file $MOOGSOFT_HOME/config/logging/moog_farmd.log.json. For example: "configuration": { "ThresholdFilter": { "level": "trace" }, } You can also modify the log level using moog_farmd --loglevel. See Moogfarmd Reference for more information. To save Moogfarmd logs to a different location and/or filename, edit the Moogfarmd log configuration file located at $MOOGSOFT_HOME/config/logging/moog_farmd.log.json. For example: "RollingFile": { "name" : "FILE", "fileName" : "/var/log/moogsoft/Moogfarmd_test.log" } LAMs and Integrations LAMs and monitoring integrations log their processing and data ingestion to two types of log files, process and capture. Ticketing integrations do not have dedicated log files, and instead log their processing and data to var/log/moogsoft/MOO.moog_farmd.log. For more information, refer to the preceding section on Moogfarmd. Process Logs LAMs and integrations record their activities as they ingest raw data. By default these process logs are written to a log file stored in /var/log/moogsoft if the user running the LAM has write permissions for this directory. Otherwise, the logs are written to $MOOGSOFT_HOME/log. By default the log file takes the name of the LAM or integration. For example, MOO.solarwinds_lam.log. The configuration of LAM process logs is specified in a file located at $MOOGSOFT_HOME/config/logging/integrations.log.json. To specify the log configuration for a particular LAM: Make a copy of the default LAM log configuration file and rename it with the name of the LAM, for example: cd $MOOGSOFT_HOME/config/logging cp integrations.log.json solarwinds_lam.log.json Edit the file according to your LAM logging requirements. Edit the configuration_fileproperty in the log_configsection of the LAM configuration file to point to the new file. For example: log_config: { configuration_file: "$MOOGSOFT_HOME/config/logging/solarwinds_lam.log.json" } If a polling integration or LAM fails to connect to the target system using the connection details in the UI or configuration file, Moogsoft AIOps creates an alert with critical severity and writes the details to the process log. The following example shows a log file entry for a failed Zabbix Polling integration with an invalid URL: WARN : [target1][20190117 13:03:33.942 +0000] [CZabbixPollingTask.java:129] +|40001: An error response received from Zabbix REST server: [Invalid URL provided [] for User Login request]|+ The following error code raises a Moogsoft AIOps alert. The alert details are listed below: If the integration or LAM polls successfully on the next attempt, the alert is cleared. If the integration or LAM is restarted to resolve the connection issue the alert is not cleared and must be handled manually. Capture Logs In addition to process logs, all LAMs except the Logfile LAM allow you to capture the raw data they receive. This feature is disabled by default. To enable it, edit the LAM's configuration file and uncomment the capture_log property in the agent section. The default path to the capture log files is $MOOGSOFT_HOME/log/data-capture/<lam_name>.log. An example agent section in a LAM configuration file is as follows: agent: { name : "SolarWinds", capture_log : "$MOOGSOFT_HOME/log/data-capture/solarwinds_lam.log" } My SQL Log location: /var/log/mysqld.log MySQL logging defaults to the highest level. To remove warnings from the MySQL log: Edit /etc/my.cnf. Add the following line: log_warnings = 0 Restart the MySQL service. RabbitMQ Log location: /var/log/rabbitmq Refer to the RabbitMQ documentation for information on how to configure RabbitMQ. Elasticsearch Log location: /var/log/elasticsearch/elasticsearch.log. Refer to the Elasticsearch documentation for information on how to configure Elasticsearch. Hazelcast and Kryo Moogsoft AIOps uses two libraries for persistence: Hazelcast and Kryo. You can configure the logging for these components in the file $MOOGSOFT_HOME/config/logging/moog_farmd.log.json. The logging level is set to WARN by default. Logs are written to the process log file. Log Rotation Moogfarmd, LAMs and integrations use a Java-based logging utility that automatically runs at startup to prevent log files becoming unmanageably large. The utility also prevents the loss of log data when you restart Moogsoft AIOps. The utility compresses each rotated log into gzip (.gz) format and appends the filename with a date stamp. Rotated log files are retained for 40 days before they are purged. The logging utility rotates the logs when the file size reaches 500MB by default. It rotates up to 40 files by default. This is controlled in by two properties under RollingFile and Policies in $MOOGSOFT_HOME/config/logging/<component_log_file_name>.log.json. size The size limit of the log file in megabytes that triggers a log rotation. Type: Integer Default: 500M max The maximum number of files that Moogsoft AIOps can rotate. Type: Integer Default: 40 The default logger configuration appears in $MOOGSOFT_HOME/config/logging/<component_log_file_name>.log.json as follows: "Policies": { "SizeBasedTriggeringPolicy": { "size": "500M" } }, "DefaultRolloverStrategy": { "max": "40" }
https://docs.moogsoft.com/AIOps.7.3.0/configure-logging.html
2020-05-25T06:07:10
CC-MAIN-2020-24
1590347387219.0
[]
docs.moogsoft.com
Date: Sun, 24 May 2020 23:03:15 -0700 (PDT) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_200474_696341538.1590386595547" ------=_Part_200474_696341538.1590386595547 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: This page describes how to create artifacts for the En= terprise Service Bus (ESB). It contains the following sections: The ESB editor in Developer Studio can support ESB 4.x (which contains m= any new mediators) or ESB 3.0.1. To configure the version you want to suppo= rt, click Windows > Preferences (Eclipse &= gt; Preferences on a Mac), expand the Developer Studio category an= d click ESB, and then select the version you want to suppo= rt. If you are editing an existing ESB configuration, reload the editor to = apply the version change. When working with your ESB configurations in the Source= view instead of the Design view, you can hover your mouse= over any element (such as the definitions tag) to see the syntax of that e= lement. You can also press Ctrl + Space to get suggested auto-= completion text. For example, if you view a proxy service in the Source = view, and you remove the / from the <inSequence/><= /code> element and press Ctrl + space, the Content Assist appe= ars and lists all the options you can add at this point, including the clos= ing </inSequence> element. You can create an ESB Config Project to save all the ESB related artifac= ts such as proxy services, endpoints, sequences, and synapse configurations= . The new project has now been created in the workspace. If you browse ins= ide the project, you will see a project structure as below, with folders cr= eated for different resources such as endpoints, local entries, proxy servi= ces, and sequences. An endpoint is the address of a service where messages are sent. Typical= ly, the endpoint is the address of a proxy service, which acts as the front= end to the actual service. When you create an endpoint, you give it a name, at which point it appea= rs in the Defined Endpoint section of the tool palette in = Developer Studio. To use a defined endpoint, simply drag and drop it from t= he palette to the sequence, proxy service, or other artifact where you want= to reference it. You can create a = new endpoint or i= mport an existing endpoint from an XML file, such as a Synapse Configur= ation file. Follow these steps to create a new endpoint. Alternatively, you can import an existing endpoi= nt. Follow these steps to import an existing endpoint from an XML file (such= as a Synapse configuration file) into an ESB Config project. Alternatively= , you can create a n= ew endpoint. endpointsfolder under the ESB Config project you s= pecified, and the first endpoint is open in the editor. An inbound endpoint is a message source that can be configured dynamical= ly and resides in the server side of the WSO2 ESB. Inbound endpoints achiev= e multi tenancy support for all transports. The following types of protocols are supported by inbound endpoints: HTTP HTTPS File JMS HL7 KAFKA See the inbou= nd endpoint samples in the ESB Documentation for more information. Create an inbound endpoint in the ESB Config project you created a= bove by following the instructions below. You can create a new inbound endpoint<= /a> or import an existing inbound endpoint from an XML file, such= as a Synapse Configuration file. Follow the steps below to create a new inbound endpoint. Alternatively, = you can i= mport an existing inbound endpoint. onErrorsequence from the = tool palette as shown below: Follow the steps below to import an existing inbound endpoint from an XM= L file (such as a Synapse configuration file) into an ESB Config project. A= lternatively, you can create a new inbound endpoint. inbound-endpointsfolder und= er the ESB Config project you specified, and the first inbound endpoint is = open in the editor. A local entry is used to store static content as a key-value pair, where= the value could be a static entry such as a text string, XML code, or a UR= L. This is useful for the type of static content often found in XSLT files,= WSDL files, URLs, etc. Local entries can be referenced from mediators in E= SB mediation flows and resolved at runtime. You can create= a new local entry or import an existing local entry from an XML file, such as a Synap= se Configuration file. Follow these steps to create a new local entry. Alternatively, you can <= a href=3D"#CreatingESBArtifacts-Importingalocalentry">import an existing lo= cal entry. Follow these steps to import an existing local entry from an XML file (s= uch as a Synapse configuration file) into an ESB Config project. Alternativ= ely, you can crea= te a new local entry. local-entriesfolder under the ESB Config proje= ct you specified, and the first local entry appears in the editor. After you create a local entry, you can reference it from a mediator in = your mediation workflow. For example, if you created a local entry with XSL= T code, you could add an XSLT mediator to the workflow and then reference t= he local entry as follows: A sequence is a tree of mediators that you can use in your mediation wor= kflow. You can create a sequence in your ESB Config project or in the regis= try and then add it right to that project's mediation workflow, or you can = refer to it from a sequence mediator in this project or another project in = this Eclipse workspace. This section describes how to create a new sequence or import an existing sequence from an XML file (such = as a Synapse Configuration file), and how to use the sequence in your mediation flow. Developer Studio allows you to create a Registry Resource project, which= can be used to store Resources and Collections you want to deploy to the r= egistry of a Carbon Server through a Composite Application (C-App) project.= When you create a sequence, you can save it as a dynamic sequence in the R= egistry Resource project and refer to that sequence from the mediation flow= . At runtime, when you deploy the CAR file with both the Registry Resource = project and mediation flow, WSO2 ESB looks up and uses the sequence from th= e registry. Follow these steps to create a new, reusable sequence that you can add t= o your mediation workflow or refer to from a sequence mediator, or to creat= e a sequence mediator and its underlying sequence all at once. Alternativel= y, you can import an e= xisting sequence. To create a reusable sequence: Type a unique name for the sequence. Creating a Main Sequence If you want to create the default main sequence, which just sends messag= es without mediation, be sure to name it main, which automatic= ally populates the sequence with the default in and out sequences. The sequence is now available in the Defined Sequences = section of the tool palette and ready for use. To create a sequence when creating a sequence mediator:= The mediation workflow is updated with the endpoints you added to the se= quence. The sequence is also now available in the Defined Sequences= section of the tool palette and ready for use in other meditation workflows. Follow these steps to import an existing sequence from an XML file (such= as a Synapse configuration file) into an ESB Config project. Alternatively= , you can create a new = sequence. After you create a sequence, it appears in the= Defined Sequences section of the tool palette. To use thi= s sequence in a mediation flow, click the sequence in the tool palette and = then click the spot on the canvas where you want the sequence to appear in = the flow. The editor automatically adds any endpoints you used in your sequ= ence. If you wa= nt to use a sequence from a different project or from the registry, you cre= ate a sequence mediator and then refer to the sequence as follows: The sequence mediator name and static reference key are updated to point= to the sequence you selected. A connector allows your message workflow to con= nect to and interact with an online service such as Twitter or Google Sprea= dsheets. Each connector provides operations that perform different actions = in that service. For example, the Twitter connector has operations for crea= ting a tweet, getting a user's followers, and more. There are over a hundre= d predefined ES= B connectors you can download from the WSO2 Connector Store, which connect t= he ESB to various APIs catering to your enterprise requirements such as, Bi= lling & Accounting, Communication, E-commerce, IoT, Sales & Marketi= ng, HR Management, etc. You can also create your own= connectors. Using WSO2 Developer Studio, you can develop configurations with connect= ors and deploy these configurations and connectors as Composite Application= aRchive (CAR) files into WSO2 Enterprise Service Bus. Follow the steps below to import connectors into WSO2 Developer Studio:<= /p> the connector store location and click Connect. Select th= e required connectors and click Finish. For complete information on each of the predefined connectors, see = ESB connectors.<= /p> Follow the steps below to remove connectors from WSO2 Developer Studio:<= /p> Follow the steps below to create a Composite Application aRchive (C= AR) file containing the connectors: Click Add Connector and then File Syst= em and select the connector files ( .zip) from th= e file system. Click Finish. Connector files download= ed from the connector store are stored in the $workspace/.metada= ta/.Connectors folder. = For more information, see Packaging Artifacts Into Deployable Archive= s. A proxy service is a virtual service that receives messages and optional= ly processes them before forwarding them to a service at a given endpoint. = It acts as a front end to the service, allowing you to define a mediation w= orkflow that performs whatever logic is needed on the message before sendin= g it to the actual service. You can crea= te a new proxy service or import an existing proxy service from an XML= file, such as a Synapse Configuration file. Follow these steps to create a proxy service. Alternatively, you can import an existing p= roxy service. the = URN is /pub/ADSL/, you would enter the URI. To ensure that = the URI is valid, click Test URI. You then enter the servi= ce name and port of the WSDL. Lastly, if you want to publish this WSDL, cli= ck Publish Same Service Contract. Follow these steps to import an existing proxy service from an XML file = (such as a Synapse configuration file) into an ESB Config project. Alternat= ively, you can = create a new proxy service. src/main/synapse-config/proxy-servicefol= der under the ESB Config project you specified, and the first proxy service= appears in the editor. A Synapse configuration file contains one or more ESB artifacts, such as= endpoints and sequences. You can import an existing Synapse configuration = to import all its ESB artifacts in one step. Follow these steps to import an existing Synapse configuration into an E= SB Config project. src/main/synapse-configfolder under = the ESB Config project you specified, and the first artifact appears in the= editor. A REST API allows you to configure REST endpoi= nts in the ESB by directly specifying HTTP verbs (such as POST and GET), UR= I templates, and URL mappings through an API. You can create a new REST API, or you can import an existing REST API&= nbsp;from the file system. Follow these steps to create a REST API. Alternatively, you can import an existing REST API= a>. Follow these steps to import an existing REST API into an ESB Config pro= ject. Alternatively, you can create a new REST API. src/main/synapse-config/apifolder und= er the ESB Config project you specified, and the first API appears in the e= ditor. A scheduled task allows you to run a piece of = code triggered by a timer. You can create a new task, or you can import an existing task from the file system= . Follow these steps to create a new scheduled task. Alternatively, you ca= n import an existing task= . src/main/synapse-config/tasksfolder under the ESB Config pr= oject you specified. When prompted, you can open the file in the editor, or= you can right-click the task in the project explorer and click Follow these steps to import an existing scheduled task into an ESB Conf= ig project. Alternatively, you can create a new task. src/main/synapse-config/tasksfolder = under the ESB Config project you specified, and the first task appears in t= he editor. A message store is used to temporarily hold me= ssages before they are delivered to their destination by a message processo= r, allowing you to control the flow of messages and implement different mes= saging and integration patterns. You can create a new message store, or you can import an existing message stor= e from the file system. Follow these steps to create a new message store. Alternatively, you can= import an existi= ng message store. src/main/synapse-config/message-storesfolder under the ESB C= onfig project you specified and appears in the editor. You can click its ic= on in the editor to view its properties. Follow these steps to import an existing message store into an ESB Confi= g project. Alternatively, you can create a new message store. src/main/synapse-config/message-storesfolder under the ESB C= onfig project you specified and appears in the editor. A message processor is used to deliver message= s that have been held in a message store. You can create a new message processor, or you = can import an= existing message processor from the file system. Be sure to create the message store before creating the message processo= r, as you will need to specify the message store that this processor applie= s to. Follow these steps to create a new message processor. Alternatively, you= can import a= n existing message processor. src/main/synapse-config/message-processorsfolder under t= he ESB Config project you specified and appears in the editor. You can clic= k its icon in the editor to view its properties. Follow these steps to import an existing message processor into an ESB C= onfig project. Alternatively, you can create a new message processor. src/main/synapse-config/message-processorsfolder under t= he ESB Config project you specified and appears in the editor. A template is a reusable, parameterized config= uration that you can reuse when creating endpoints and sequences. You can <= a href=3D"#CreatingESBArtifacts-CreateTemplate">create a new template, = or you can import an e= xisting template from the file system. Follow these steps to create a new scheduled task. Alternatively, you ca= n import an existing = template. Follow these steps to import an existing template into an ESB Config pro= ject. Alternatively, you can create a new template. src/main/synapse-config/templates= folder under the ESB Config project you specified, and the first template = appears in the editor. A mediator project allows you to create a custom mediator that performs = some logic on a message. You can create a new mediator project, or you can import an existing med= iator project from the workspace. Once the mediator project is finalised, you can export it as a deployabl= e artifact by right-clicking on the project and selecting Expo= rt Project as Deployable Archive. This creates a JAR file that you= can deploy to a WSO2 ESB. Alternatively, you can group the mediator projec= t as a Composite Application Project, create a Composite Application Archiv= e (CAR), and deploy it to the ESB. From ESB 4.8.1 onwards, a URL classloader is used to load classes in the= mediator (class mediators are not deployed as OSGi bundles). Therefore, it= is only possible to refer to the class mediator from artifacts packed in t= he same CAR file in which the class mediator is packed. Accessing the class= mediator from an artifact packed in another CAR file is not possible. Howe= ver, it is possible to refer to the class mediator from a sequence packed i= n the same CAR file and call that sequence from any other artifact packed i= n other CAR files. Follow these steps to create a new mediator. Alternatively, you can import a mediator p= roject. The mediator project is created in the workspace location you specified = with a new mediator class that extends org.apache.synapse.mediators.A= bstractMediator. Follow the steps below to import a Java mediator project (that includes = a Java class, which extends the org.apache.synapse.mediators.Abs= tractMediator class) to WSO2 Developer Studio. org.apache.synapse.= mediators.AbstractMediatorare listed. Optionally, you can chan= ge the location where the mediator project will be created and add it to wo= rking sets. The mediator project you selected is created in the location you specifi= ed. The mediator projects you create using WOS2 Developer Studio are of= the org.wso2.developerstudio.eclipse.artifact.mediator.project.= nature nature by default. Follow the steps below to view this n= ature added to the <PROJECT_NAME>/target/.pro= ject file of the Java mediator project you importe= d. For information on importing a mediator project, which you created using WSO2 Developer Studio, see Importing Existing Projects.= p> Smooks is an extensible framework for building appl= ications that process data, such as binding data objects and transforming d= ata. To create a Smooks configuration artifact, you must first create a reg= istry resource as described in Creating Governance Registry Artifacts. When cre= ating the registry resource, select the From Existing Template option and select Smooks Configuration as the templat= e. The smooksconfig.xml file is created. Double-click it in th= e Project Explorer to open it in the embedded JBoss Smooks editor.  = ; Click Input Task, create the da= ta mapping, and save the configuration file. For more information on mappin= g data using Smooks in Eclipse, see the Smooks documenta= tion. Before you can run the Smooks configuration, you must add libraries from= the Smooks framework to your registry resources project. All the Smooks-related libraries have been added to the project classpat= h. You can now run the Smooks configuration file by right-clicking the file= and choosing Run As > Smooks Run Configuration. If you= r Smooks configuration is correct, the console displays the results accordi= ng to the input model and output model you specified. You can now add the Smooks configuration artifact to a proxy service or = sequence to use it in the ESB. To do so, create a proxy service. Drag and drop a Log mediator= and a Smooks mediator to the InSequence. Double= -click on the Smooks mediator to see the Property view. Click the button at the right hand corne= r of the Configuration Key field. The Resource Key Editor dialog box appears. Specify fro= m where you should select the resource file. Select Workspace option since we have the created Now you have successfully referred to the Smooks configuration within yo= ur proxy service. To move an ESB artifact from one project to another, right-click on the artifact (in this instance, = Sequence123.xml), click Move and then = select the folder to which you want to move the artifact. It is also possib= le to simply drag and drop the artifact.  = ; Once you create all the ESB components such as sequences, proxy services= , endpoints, local-entries, you can create a Composite Application Project = to group them and create a Composite Application Archive. Before creat= ing the CAR file, make sure to change the server= role of the created bundle to EnterpriseServiceBus.&= nbsp;To deploy it to WSO2 Enterprise Service Bus, you need to change it to = the correct server role. For more information, see Packaging Artifacts Into Deployable Archives= .
https://docs.wso2.com/exportword?pageId=45970440
2020-05-25T06:03:15
CC-MAIN-2020-24
1590347387219.0
[]
docs.wso2.com
View source for Working with CHOPs in Python You do not have permission to edit this page, for the following reason: The action you have requested is limited to users in the group: administators. You can view and copy the source of this page. Return to Working with CHOPs in Python.
https://docs.derivative.ca/index.php?title=Working_with_CHOPs_in_Python&action=edit&section=1
2020-05-25T05:20:15
CC-MAIN-2020-24
1590347387219.0
[]
docs.derivative.ca
Setting Up a Printer in Tails¶ Because Tails is supposed to be as amnesiac as possible, you want to shield your Tails stick from any extra inputs from, and outputs to, a potentially untrusted network. This is why we strongly recommend using a printer that does not have WiFi or Bluetooth, and connecting to it using a regular USB cable to print. Finding a printer that works with Tails can be challenging because Tails is based on the Linux operating system, which often has second-class hardware support in comparison to operating systems such as Windows or macOS. We maintain a list of printers that we have personally tested and gotten to work with Tails, in the Hardware guide; if possible, we recommend using one of those printers. The Linux Foundation also maintains the OpenPrinting database, which documents the compatibility, or lack thereof, of numerous printers from almost every manufacturer. Note The latest generations of printers might or might not be represented by the OpenPrinting database; also, the database does not document whether or not a printer is wireless, so this will involve manually checking models of interest, if you wish to use this resource as a guide for purchasing a non-wireless printer suitable for use with SecureDrop. With that in mind, this database is arguably the best resource for researching the compatibility of printers with Linux. As a tip for narrowing down your search, look for printers that are compatible with Debian, or Debian-based distributions like Ubuntu, since Tails itself is also Debian-based. This might increase the chances for a seamless installation experience in Tails. In any case, this document outlines the usual set of steps that we follow when attempting to use a new printer with Tails. Installing and Printing via the Tails GUI¶ Let’s look at the flow in Tails 4 for installing a USB-connected printer. On the Tails welcome screen, unlock your persistent volume, and set an admin password. This ensures that you won’t have to reinstall the printer each time you start Tails. Connect the printer to your Tails-booted computer via USB, then turn the printer on. Now, you’ll want to single-click your way through Applications ▸ System Tools ▸ Settings, then select Devices ▸ Printers. The screenshot below highlights the “Devices” section in which the printer settings can be found: If this is the first time you’ve tried to install a printer, the “Printers” section will look like this: Click Add a Printer. After a brief period during which Tails searches for printers, you should see a list of printers that Tails has auto-detected: In this example, we’ve connected an HP ENVY-5530 (not a model we recommend for production use). Clicking on this printer will select it for installation. The installation can take a few seconds, during which it looks like nothing is happening. Assuming you receive no errors in this process, you will then see a screen like the following one, which indicates that the printer is ready for printing. Printing from the Command Line¶ After you have configured your printer, you can also easily print from the command line using the lp command. If you haven’t already set your installed printer as default in the GUI, you can quickly do so by adding this line to your ~/.bashrc file, or entering this directly into the terminal: export PRINTER=Printer-Name-Here If you need to find the name of the printer, you can use lpstat to get a list of installed printers, as such: lpstat -a Once you’ve set your default printer, you can easily print from the terminal by using the following syntax: lp filename.extension While printing from the GUI is much easier, once you’ve got everything set up, it’s equally straightforward from the command line, if you prefer that environment.
https://docs.securedrop.org/en/master/tails_printing_guide.html
2020-05-25T05:35:19
CC-MAIN-2020-24
1590347387219.0
[array(['_images/select_devices_from_settings.png', 'select devices from settings'], dtype=object) array(['_images/add_printer.png', 'add printer'], dtype=object) array(['_images/select_printer_to_add.png', 'select printer to add'], dtype=object) array(['_images/printer_ready.png', 'printer ready'], dtype=object)]
docs.securedrop.org
Web Browser Support The Telerik UI for ASP.NET Core helpers and framework components are designed to support all major web browsers and deliver a cross-browser compatibility, standards compliance, or touch-device support. However, depending on the specifics of your project, the components you use, and the versions of the helpers, their browser support may vary. Regular Support Most Telerik UI for ASP.NET Core helpers have no specific limitations as of the browser versions they support. Support for Data Visualizing Helpers The Telerik UI for ASP.NET Core helpers which render data visualization, such as the Charts, Gauges, Barcodes, Diagrams, and Maps, may require more recent browser versions according to the following table. Fully supported browsers Browsers with limited support Support for PDF Export The Telerik PDF generator is tested and supported in the following desktop browsers: - Internet Explorer 9 and later. - Latest Chrome, Firefox, Safari, and Blink-based Opera versions. Internet Explorer 9 and Safari do not support the PDF-related. Even though exporting in PDF might work on some mobile devices in specific scenarios, PDF export is not supported in mobile browsers and hybrid mobile applications. Best Performance To boost the performance of your project: - Always use an up-to-date browser version. - Check Disable Script Debugging from your browser configuration options. - Activate Caching in Internet Explorer. Notes on Web Browser Support - As of the Telerik UI for ASP.NET Core 2017 R1 release, Internet Explorer 8 is no longer supported. - Browsers in beta stage are not supported. - Zoomed-in pages are not supported. - Zoomed-out pages are not supported. Different browsers handle sub-pixel calculations differently and zooming out the page may lead to unexpected behavior—for example, missing borders. - Exporting a zoomed-in or zoomed-out page to PDF is not supported. Quirks mode is not supported.. It is highly advisable to use Internet Explorer Edge mode over a META tag or an HTTP header: <meta http- Telerik UI for ASP.NET Core uses a progressive enhancement for its CSS styling. As a result, old and obsolete browsers may ignore CSS 3 styles such as rounded corners and linear gradients.
https://docs.telerik.com/aspnet-core/compatibility/browser-support
2020-05-25T05:28:55
CC-MAIN-2020-24
1590347387219.0
[]
docs.telerik.com
Genesys Inbound From Genesys Documentation This topic is part of the manual PureCloud Use Cases for version Public of Genesys Use Cases. Genesys Inbound Use Cases for PureCloud Sort or search the table to find the use case you need to edit. Click the title link to go to the use case.
https://all.docs.genesys.com/UseCases/Public/PureCloud/InboundUseCases
2020-05-25T05:11:46
CC-MAIN-2020-24
1590347387219.0
[array(['/images-supersite/thumb/9/96/Icon-pure-cloud-2x-126x76.png/51px-Icon-pure-cloud-2x-126x76.png', 'PureCloud PureCloud'], dtype=object) ]
all.docs.genesys.com
Tutorial This tutorial takes you through the steps required to create a simple, single-client, single-server distributed application from an existing standalone application. These steps are: - Create interface definition and application configuration files. - Use the MIDL compiler to generate C-language client and server stubs and headers from those files. - Write a client application that manages its connection to the server. - Write a server application that contains the actual remote procedures. - Compile and link these files to the RPC run-time library to produce the distributed application. The client application passes a character string to the server in a remote procedure call, and the server prints the string "Hello, World" to its standard output. The complete source files for this example application, with additional code to handle command-line input and to output various status messages to the user, are in the RPC\Hello directory of the Platform Software Development Kit (SDK). This section presents its discussion in the following topics: - The Standalone Application - Defining the Interface - Generating the UUID - The IDL File - The ACF File - Generating the Stub Files - The Client Application - The Server Application - Stopping the Server Application - Compiling and Linking - Running the Application
https://docs.microsoft.com/en-us/windows/win32/rpc/tutorial
2020-05-25T06:22:21
CC-MAIN-2020-24
1590347387219.0
[]
docs.microsoft.com
Creating Connections using the Microsoft Office 365 (Deprecated) Connector Use the following procedure to create a connection to Microsoft Office 365 on an inventory beacon using the legacy Microsoft Office 365 (deprecated) connector. A separate connection is required for each tenant of Microsoft Office 365. (Typically, there is one tenant per enterprise; but your corporate history, particularly of mergers and acquisitions, may mean that your enterprise has multiple Microsoft tenants.) The inventory beacon requires this connection to import entitlements, users, and usage information from the Microsoft Office 365 online account. Each per-tenant import covers all subscriptions for that tenant. To create a connection to Microsoft Office 365: - Ensure that you have your preferred schedule for imports from Microsoft Office 365 set on the appropriate inventory beacon: -. - If there is not already a suitable schedule in the list, click New... and complete the details (see Creating a Data Gathering Schedule for more information). Otherwise, identify the schedule you will use. - Select the Inventory Systems page (in the same navigation group). -: Note: If you have not installed the FlexNet Beacon released with FlexNet Manager Suite 2019 R1 or later, then Microsoft Office 365 (deprecated) will not appear as a Source Type connection. Instead, Microsoft Office 365 will appear as the legacy connector source type. - Connection Name: The name of the inventory connection. When the data import through this connection is executed, the data import task name is same as the connection name. - Source Type: Select Microsoft Office 365 (deprecated) from this list. - Optionally, if your enterprise uses a proxy server to enable Internet access, complete (or modify) the values in the Proxy Settings section of the dialog box in order to configure the proxy server connection. - Use Proxy: Select this checkbox if your enterprise uses a proxy server to enable Internet access. Complete the additional fields in the Proxy Settings section, as needed. If the Use Proxy checkbox is not selected, the remaining fields in the Proxy Settings section are disabled. - Proxy Server: Enter the address of the proxy server using HTTP, HTTPS, or an IP address. Use the format,, or IPAddress:PortNumber). This field is enabled when the Use Proxy checkbox is selected. - Username and Password: If your enterprise is using an authenticated proxy, specify the username and password of an account that has credentials to access the proxy server that is specified in the Proxy Server field. These fields are enabled when the Use Proxy checkbox is selected. - Complete (or modify) the values in the Microsoft Office 365 section of the dialog. All of the following values are required:Note: If you have multiple tenants within Microsoft Office 365 (for example, separate subscriptions for different corporate units or locations), you need to create a separate connector for each tenant using its own credentials. Within each tenant, a single connection and a single import recovers data for all your subscriptions (if you have multiple subscriptions). - Username: Microsoft Office 365 tenant user name. - Password: Microsoft Office 365 tenant password. - Save the connection. - Select your new connection from the displayed list, and click Schedule.... - In the dialog that appears, select the name of your chosen schedule for inventory collection through this connection, and click OK. - At the bottom of the FlexNet Beacon interface, click Save, and if you are done, also click Exit.Tip: Consider whether you want to select your connection, and click Execute Now, before you exit.
https://docs.flexera.com/fnms2019r1/EN/WebHelp/tasks/ManageOffice365Connection_OLH.html
2020-05-25T04:09:48
CC-MAIN-2020-24
1590347387219.0
[]
docs.flexera.com
LinkedIn Ads On This Page LinkedIn Ads enable you to display sponsored content in the LinkedIn feed of professionals you want to reach, by way of single image ads, video ads, and carousel ads. With LinkedIn Ads, you can target your most valuable audiences using accurate, profile-based first-party data. You can use Hevo Pipelines to replicate your LinkedIn reports to the desired Destination database or data warehouses LinkedIn account with access to at least one advertiser profile. Adminaccess to the organization’s LinkedIn page. Note: You can create a Pipeline without Adminaccess too. However, you will not be able to access certain objects such as Page statistics, Follower statistics, Share statisticsand Video Ads. Configuring LinkedIn Ads as a Source Perform the following steps to configure LinkedIn Ads as the Source in your Pipeline: Click PIPELINES in the Asset Palette. Click + CREATE in the Pipelines List View. In the Select Source Type page, select LinkedIn Ads. In the Configure your LinkedIn Ads Account page, click ADD LINKEDIN ADS ACCOUNT. Provide credentials for your LinkedIn account which has access to at least one advertiser profile, and click Sign In. Click Allow to authorize Hevo to access your advertiser profile to read reporting data. In the Configure your LinkedIn Ads Source page, specify the following: Pipeline Name: A unique name for the Pipeline. Select Accounts: Select the advertiser profiles for replicating the reports data. Historical Sync Duration: The duration for which the past data must be ingested. Click TEST & CONTINUE. Proceed to configuring the data ingestion and setting up the Destination. Data Replication Note: The custom frequency must be set in hours, as an integer value. For example, 1, 2, 3 but not 1.5 or 1.75. Historical Data: Hevo ingests the historical data for ad_analyticsand statsobjects on the basis of the historical sync duration selected at the time of creating the Pipeline. See Limitations. Incremental Data: Once the historical data ingestion is complete, every subsequent run of the Pipeline fetches the entire data for the objects you select. All objects apart from ad_analyticsand statsare loaded from scratch on every run. Data Refresh: Data for the last 30 days is refreshed on a rolling basis for ad_analyticsobjects. Data for statsobjects is refreshed for the past two days to include attributed data for the past days. Schema and Primary Keys Hevo uses the following schema to upload the records in the Destination database: Data Model The following is the list of tables (objects) that are created at the Destination when you run the Pipeline: Limitations All available datawill fetch data up to 6 months for Ad Analytics by Campaignand Ad Analytics by Creativeand up to one year for Page statistics, Follower statisticsand View statisticsdue to limits enforced by LinkedIn. Revision History Refer to the following table for the list of key updates made to this page:
https://docs.hevodata.com/sources/mkt-analytics/linkedin-ads/
2022-09-25T02:51:15
CC-MAIN-2022-40
1664030334332.96
[]
docs.hevodata.com
The Concept TDR Kotelnikov GE is a wideband dynamics processor combining high fidelity dynamic range control with deep musical flexibility. As a descendant of the venerable TDR Feedback Compressor product family, Kotelnikov GE has directly inherited several unique features such as a proven control scheme, individual release control for peak and RMS content, an intuitive user interface, and powerful, state of the art, high-precision algorithms. With a sonic signature best described as “stealthy”, Kotelnikov GE has the ability to manipulate the dynamic range by dramatic amounts, while carefully preserving the original tone, timbre and punch of a musical signal. As such, it is perfectly suited to stereo bus compression as well as other critical applications. Kotelnikov GE does not try to emulate any previously available device. The concept is a proud digital processor, it is the original! Notable parallel bypass (i.e. processing not interrupted) - Extensive documentation Precision, Aliasing and the Solution Controlling the dynamic range of an audio signal in the digital domain is not as easy as it looks. A whole array of problems makes it very difficult to build a truly effective and musically attractive compressor in the digital domain. The most significant restrictions are due to the discrete (i.e. “stepped”) nature of digital signal storage. Without delving too deep into the mathematics, it is essential to understand that the data stored in audio-files is not the actual analogue audio signal. It is a very economic, intermediate format. The true analogue signal is only reconstructed by the anti-alias filter inside the D/A converter. However, if one wants to control the amplitude of music accurately, it is absolutely essential to know the actual waveform (i.e. it needs to know about the values between samples as well). Another huge problem in digital dynamics control is a phenomenon called aliasing or Moiré images. Aliasing appears in all discrete (i.e. “digital”) systems as soon the frequency of a signal exceeds the Nyquist rate (half the sample-rate). The evil detail here is that contrary to analogue systems, signals exceeding the bandwidth aren’t gradually “faded out”, instead, they mirror at the Nyquist frequency and overlap into the audible range at full energy, and in this process, lose any harmonic relation to the original signal. Compressors are non-linear systems. All non-linear systems add harmonic (or non-harmonic) content to the processed signal and thus have the potential to extend the bandwidth significantly. The more aggressive the non-linearity, the stronger and higher the newly generated partials (harmonics). If the sample-rate is too low to handle the extended bandwidth, these harmonics will alias and lose their harmonic relation to the fundamental, resulting in inaccurate, “unwanted” behavior, which in turn directly produces an unpleasant sound. Kotelnikov GE’s algorithm was carefully designed to avoid these issues. To achieve this in an efficient manner, the algorithm is split in two parts, both typically running at higher rates than the original signal (given a standard rate such as 44.1kHz or 48kHz). - Sidechain The sidechain is responsible for generating a control signal for the gain cell. It consists of several non-linear elements such as the threshold, RMS detection and timing filters. To avoid aliasing in the control signal (which would severely limit the accuracy of the compression), the sidechain uses a bandwidth up to 20x wider than the audible bandwidth. - Gain Cell The gain cell is a central part of the compressor where the audio signal’s level is adjusted by the control signal delivered by the sidechain. This operation is a non-linear process and potentially doubles the bandwidth. To handle this bandwidth, this multiplication always runs at a minimum of double the audio sample rate (i.e. at least 88.2kHz). One interesting design feature of Kotelnikov GE is that only the compressed portion of the signal (listen via “Delta” switch) is oversampled. This leaves the original signal completely untouched – as long as no gain reduction is occurring, the plugin is 100% bit transparent. In both cases, the resampling is done with very high quality linear-phase time-domain convolution. Please note that these quality improvements come at a price – CPU cycles – but from a sonic point of view, and in the context of the fast growth of computer power, we think they are well worth the compromise. Audio Dynamics Control Beside the more or less obvious technical issues illustrated in the previous chapter, generations of audio engineers suffered from the fact that the dynamic behavior of music signals can be very complex. Controlling these dynamics, be it overload protection, loudness balancing or creative shaping is hard to achieve without introducing negative side-effects (audible collateral damage). The central problem being the processor’s unawareness of musical criteria, which in turn enforces its own sound (and intellectual restrictions) onto the signal. In the best case, this common phenomenon is referred to as a compressor’s “color” and sometimes can be of great creative use. However, it also means that the processor alters the original fidelity in one way or another. There are times when the audio engineer faces a dynamic challenge, an error that needs correction, but doesn’t want to accept any additional color being brushed all over the entire signal. This is particularly important to the mastering engineer, especially in the case of high quality mixes, but also during creative shaping of natural instruments in a mix. Most offers on the market are based on rather crude attack/release timing concepts, sometimes combined with more or less intelligent automatisms. In fact, they aren’t much more elaborate than simple thermostats. However, dynamics in music usually do not only consist of attack and decay phases, they are far more complicated. In order to control dynamics in a both effective and transparent manner, it is essential that the processor has the ability to analyze the signal in a musically sensible manner. It’s no wonder that dynamics control systems purely based on A/R models are much better suited to relatively simple signals such as monophonic instruments or other sources already having predictable dynamics, and typically do more harm than good once they face more complex material and/or processing tasks. Kotelnikov GE uses an elaborate control system based on two specialized parallel paths, each handling specific signal phases independently with high technical effectivity and musical grace. The compressor runs two distinct and separate detection and compression algorithms in parallel: - Peak Path The peak detection/compression path primarily follows the rectified version of the (true!) audio waveform with great precision. The peak path’s release phase can be tuned via the PEAK RELEASE control, but also automatically accelerates the release time in certain, high dynamic range situations. This avoids over-compression of small, “crispy” transient events. This form of over-compression is highly audible, as it dulls the original signal substantially and can creates a “nervous” pumping side-effect. - RMS Path The RMS path reacts to overall energy levels (similar to human hearing) and ignores short transients. It can be seen as a very slow and smooth compressor that handles the constant or ‘steady-state’ parts of the signal. To be honest, Kotelnikov GE’s RMS path doesn’t use a true RMS detection in the “text-book” sense. Instead, it relies on a highly advanced derivate of the regular RMS detector based on a precise Hilbert Transform detector. The result is a frequency-independent RMS detector, practically free of any harmonic distortion. Contrary to conventional RMS detectors which handle low frequencies much like peak detectors and only higher frequencies as true RMS, Kotelnikov GE’s RMS stage automatically uses the optimal RMS window size for every frequency. Additionally, Kotelnikov GE’s RMS path features an advanced release gating (“freezing”) mechanism preventing noise floor build-ups during programme pauses as well as other forms of pumping. It activates as soon the RMS section detects a sudden pause in the input signal. To summarize, in contrast to the peak stage, the RMS stage’s own dynamic behavior has the ability to lengthen the RMS release time as required by given audio content. Kotelnikov GE can even increase RMS release to (almost) infinity during certain situations. The section generating the highest reduction at a specific point in time takes control of the whole compression action. Technically speaking, we have 2 cross-linked side-chains running in parallel, whereby the peak section mostly controls short, high amplitude events and the RMS section takes control over sections having “steady state dynamics”. This contradiction is perhaps better explained as “Sections without substantial dynamics”. This unique approach offers all of the advantages of peak compression combined with all of the advantages of RMS compression. Furthermore, the RMS path negates the disadvantages of the peak path and vice versa. This design guarantees a very stable stereo image and low compression artifacts over a wide range of material and settings. The examples below roughly illustrate the general idea: Input AC Input DC (Absolute Input) Peak Section RMS Section Peak + RMS Combined Controls and Displays Kotelnikov GE offers deep access to various musically relevant parameters. Most of them will be familiar to the experienced audio engineer. A few less common parameters however, deserve special attention. The following section addresses the purpose, meaning and internal mechanisms of the user-interface. In order to improve UI navigation, the user interface is structured in logical sections: - Gain Control Parameters - Toolbar and Generic Settings - Output Section - Monitoring Section - Dynamics Metering - Advanced Compression Parameters - Timing Section - Detection Preprocessing Detection Pre-processing Low Frequency Relaxation Stereo Sensitivity Gain Control Threshold Peak Crest Soft Knee GR Limit Ratio Timing Section Attack Release LEDs Release Peak Release RMS More on the Timing Section Dual release paths offer a wide range of musically useful options. For example; single drum peaks can be allowed to recover quickly to avoid “dulling” and “breathing” side-effects. At the same time, sustained content like bass-lines or synth pads can recover more slowly and thus strongly reduce typical side-effects like “pumping” and distortion. For complex material such as full mixes we recommend a RELEASE PEAK value between 25-100ms and RELEASE RMS values above 150ms. Faster settings are useful for particularly dynamic material like solo drums, percussion and acapellas. Use PEAK CREST to control the overall aggressiveness of the process. Input characteristics and the selected PEAK CREST setting have a substantial effect on the internal peak/RMS release path switching. Because the peak path is much more sensitive than the RMS path, setting the RELEASE PEAK to a higher value than RELEASE RMS while keeping the PEAK CREST low effectively disables the RMS part of the compressor. Advanced Compressor Parameters Yin/Yang Intertia -Intertia (Inverted) Frequency Dependent Ratio SHAPE offers 5 different options: - Shelf A: A wide shelf - Shelf B: A steep shelf - Bell A: A wide bell - Bell B: A narrow bell - EL Curve: A curve following the equal loudness contours as a function of playback level. FREQ defines the center/cutoff frequency of the selected shape (inactive for the EL Curve option). AMT controls the amount of “ratio” being reduced at the frequency (region) of interest. The parameter ranges from -100% over 0% (no change) to 100%. Output Section A Note on Auto Gain Several attempts have been made to solve the problem of auto-gain in compressors.. Most of them rely on rather simplistic approaches. These types of threshold/ratio based auto-gain calculations typically ignore implications given by the timing parameters. This leads to severe mismatches between the estimated auto-gain and the truly required make-up gain. We think that such approaches create more problems than they solve. However, another closely related problem is equal loudness monitoring, which is seen as essential for an audio engineer to be able qualify a compressor’s action. A higher loudness of only 1dB usually seems to sound better, even if it contains weird saturation or pumping). Kotelnikov GE uses an elaborate equal loudness detection system which supports two separate equal loudness oriented workflows: - Equal loudness trim button (semi-automatic, explicit) Semi-automatic trimming of output gain according to in/out loudness difference. - Equal loudness bypass (automatic, implicit) Automatic adjustment of the bypass signal according to in/out loudness difference, whereby loudness estimation freezes during active bypass state. We found these two workflows to be the least obtrusive and most effective solution without restricting more traditional manual makeup gain workflows. The equal loudness detection system needs some time to analyze the current situation which is indicated by the EQUAL LOUDNESS LED. (See below for more details.) Makeup Dry Mix Output Gain Equal Loudness Trim Equal Loudness Estimation LED This state directly affects the reliability of both EQUAL LOUDNESS TRIM and EQUAL LOUDNESS BYPASS: - A flashing red light means that the Equal Loudness estimation module is currently “learning”. It indicates that the current loudness estimation is most probably not correct, i.e. Equal Loudness Trim and Equal Loudness Bypass will not work as expected. - A green light indicates a stable loudness estimation. Equal Loudness Trim and Equal Loudness Bypass now work as expected. - A blue light indicates that the input loudness now closely matches the output loudness. - A white color indicates a “frozen” state (active only during EQUAL LOUDNESS BYPASS). Monitoring Section Delta Equal Loudness Gain Control Metering Toolbar Undo/Redo Preset Management A/B Control Process-Quality Process-Target Sidechain Input VST 2 / REAPER - In REAPER, send any “key” signal to the source track’s channel 3 and 4. - Select “EXT SC” in Kotelnikov GE’s toolbar. Note: In order to work, plugin host must support multi-input plugins. Sadly, only few s support multiple plugin inputs. Reaper is probably the most popular example. AU / Logic - In Logic’s plugin window, assign a bus to operate as a sidechain signal. - Select “EXT SC” in Kotelnikov GE’s toolbar. AAX / Pro Tools - In Pro Tools’ plugin window, assign a bus to operate as a sidechain signal. - Select “EXT SC” in Kotelnikov GE Limiter 6 GE. Updates allows to Check for updates and to Download latest version. Automatic Lookups can be enabled to Check for updates (once per day). Help contains Documentation and Support links. About shows the version number, build date, format, credits, and other information. Context Menu Standard Context Menu Appendix A: Getting started with Kotelnikov GE While all these advanced concepts of Kotelnikov GE might appear a bit intimidating at first, special care has been taken to make it as easy to use as classic compressor – you’ll benefit from the sonic goodies right away, and can learn how to fine tune details later on, while already using Kotelnikov GE in your mixes. Please load the default preset. We’ll focus on these three controls for now: Play a simple signal, say an acoustic drum loop. Now move the THRESHOLD knob clockwise until the GAIN REDUCTION METER flashes up during loud passages. Notice how the signal gets more dense, stronger and punchy without actually getting louder. In simple terms, you can think of THRESHOLD as the level at which of compression starts working. RATIO controls the strength of compression – a RATIO of 1.0:1 means there is no compression at all. The range between 1.5:1 and 3.0:1 is a good universal starting point. The GAIN REDUCTION METER will give you visual feedback of the processing, and you can use the MAKEUP knob to recover the level lost during the compression process. After compensating the level via MAKEUP, you should be hearing a satisfying increase in density. Next we’ll look at the ATTACK parameter – it controls how fast the compressor reacts to content surpassing the THRESHOLD. Short attack times mean that compressor will respond very quickly, which in turn help to tame fast events such as drums, percussion and similar. This however is not always beneficial. Longer attack times on the other hand tend to accentuate the attack phase of your signal, often adding a sweet “snappy” character to the original signal. The release knobs work similar way to other compressors at first glance, the main difference being that Kotelnikov GE offers separate control over fast, peaky effects (often described as transients) and the average RMS “body” of the signal. Now try Kotelnikov GE on various signals, channels as well as groups or the stereo bus, and get familiar with these 6 central controls highlighted in the image below. Use your ears for now, and make sure to read up the detailed explanation of Kotelnikov GE’s unique timing technology – fascinating, and an amazingly powerful dynamics tool once you get a handle on the “expert parameters”. To wrap things up, here’s a short glimpse at the remaining, more advanced features, just so you know where to look in case you need deeper control over certain aspects – please refer to the related manual sections for specifics! This panel on the left gives you control over what part of your signal Kotelnikov GE listens to the most, helping you avoid common problems related to compression. The top section called LOW FREQUENCY RELAX basically tells Kotelnikov GE to ignore low frequencies to a certain degree when adjusting its compression – you don’t want the big, bad kick to duck that fragile little flute. Likewise, the bottom section named STEREO SENSITITVITY helps you focus Kotelnikov GE’s attention on certain parts of the stereo panorama. Setting DIFF to 0% has the tendency to widen stereo content during compression. The deeper the gain reduction, the wider the sound becomes. Important: These two specialized sidechain pre-processors do not change your signal directly; they only fine tune the compression character. PEAK CREST adjusts the relationship between peak- and RMS (average) compression, favoring one or the other. This parameter allows manipulating the compression behavior in a dramatic manner. A SOFT KNEE of zero means the compression kicks in immediately once a signal passes the threshold like flicking a switch, as opposed to a higher value, which gives you gradual onset of compression. The right hand section offers various means of level compensation, as well as monitoring helpers such as the BYPASS switch and the (purely informative) DELTA function. Last but certainly not least, the FDR section is worth some attention. This is a totally unique way of getting different ratios depending on the signal’s frequency. The mysterious Yin and Yang buttons add subtle compression dependent saturation to low- and high frequencies respectively. That leaves the top bar, a common feature in all TDR plugins since Slick EQ – here’s where you find the convenience functions to make your life easier, like AB comparison, undo and preset management. It is worth noting that you can save chose different processing modes for Kotelnikov GE. The default mode “Stereo” works like any other stereo processor. However, you can force mono processing at this point (“Mono”) or even process either the Stereo Sum (representing “Mid” in Mid/Side microphone array) or the Stereo Difference (equivalent to “Side” in Mid/Side microphone array). Please refer to the main section of the manual for precise, in-depth explanations of all the innovative features above. Take your time, have fun exploring this mighty tool and know that the advanced parameters are preset to sensible values by default and are simply waiting for those special situations when all you need is a little “more”. Appendix B: Vladimir Aleksandrovich Kotelnikov Vladimir Aleksandrovich Kotelnikov (Russian Владимир Александрович Котельников, scientific transliteration Vladimir Alexandrovič Kotelnikov, 6 September 1908 in Kazan – 11 February 2005 in Moscow) was an information theory and radar astronomy pioneer from the Soviet Union. He was elected a member of the Russian Academy of Science, in the Department of Technical Science (radio technology) in 1953. From 30 July 1973 to 25 March 1980 Kotelnikov served as Chairman of the RSFSR Supreme Soviet. He is mostly known for having discovered the sampling theorem in 1933, independently of others (e.g. Edmund Whittaker, Harry Nyquist, Claude Shannon). This result of Fourier Analysis was known in harmonic analysis since the end of the 19th century and circulated in the 1920s and 1930s in the engineering community. He was the first to write down a precise statement of this theorem in relation to signal transmission. He was also a pioneer in the use of signal theory in modulation and communications. He is also a creator of the theory of optimum noise immunity. He obtained several scientific prizes for his work in radio astronomy and signal theory. In 1961, he oversaw one of the first efforts to probe the planet Venus with radar. In June 1962 he led the first probe of the planet Mercury with radar. Kotelnikov was also involved in cryptography, proving the absolute security of the one-time pad; his results were delivered in 1941, the time of Nazi Germany’s invasion of the Soviet Union, in a report that apparently remains classified to today. In this, as with the above-mentioned sampling theorem, he and Claude Shannon in the US reached the same conclusions independently of each other. For his achievements Kotelnikov was awarded the IEEE 2000 Gold Medal of Alexander Graham Bell and the honorable IEEE Third Millennium Medal. Prof. Bruce Eisenstein, the President of the IEEE, described Kotelnikov as ”The outstanding hero of the present. His merits are recognized all over the world. In front of us is the giant of radio engineering thought, who has made the most significant contribution to media communication development” – [Extract from Wikipedia] A note from the developers: The decision to call this compressor “Kotelnikov” is primarily a tribute to a person deeply involved in the most important technological developments of the 21st century. It also nicely reflects the “no-nonsense” scientific claim of this product as well as its deep respect for sampling theorem. Last but not least, we hope the name is as memorable as we’re supposing.
https://docs.tokyodawn.net/kotelnikov-ge-manual/
2022-09-25T01:42:16
CC-MAIN-2022-40
1664030334332.96
[array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_full_ui.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_input_ac_waveform-1.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_input_dc_waveform.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_peak_section_waveform.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_rms_section_waveform.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_peak_rms_combined_waveform.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_contrals_and_displays-1024x489.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_yin_yang_graph_01.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_yin_yang_graph_02.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_frequency_dependent_ratio_edit.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_frequency_dependent_ratio_graph.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_reaper_key_input_routing.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_reaper_sidechain_setup.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_logic_pro_x_sidechain_setup.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/nova_ge_pro_tools_sidechain_setup.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/settings_menu.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_01.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_02.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_03.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_04.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_05.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_06.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/kotelnikov_ge_appendix_a_07.png', None], dtype=object) array(['https://docs.tokyodawn.net/wp-content/uploads/2020/03/vladimir_aleksandrovich_kotelnikov_portrait.jpg', None], dtype=object) ]
docs.tokyodawn.net
Sources Sources are the origin of data and events for ingestion into TriggerMesh. An event source often acts as a gateway between an external service and the Bridge. Sources may be irregular events, periodic data updates, batch processes, or even continuous event streams. Examples Examples of Sources include GitHub, IBM DB2 and Oracle databases, Salesforce, ZenDesk, or any number of cloud-based events such as Amazon S3, Azure Activity Log, or Google Cloud Audit Logs. All sources available can be found by listing the CRDs like so: $ kubectl get crd -o jsonpath='{.items[?(@.spec.group=="sources.triggermesh.io")].spec.names.kind}' AWSCloudWatchLogsSource AWSCloudWatchSource AWSCodeCommitSource AWSCognitoIdentitySource AWSCognitoUserPoolSource AWSDynamoDBSource AWSKinesisSource AWSPerformanceInsightsSource AWSS3Source AWSSNSSource AWSSQSSource AzureActivityLogsSource AzureBlobStorageSource AzureEventGridSource AzureEventHubSource AzureIOTHubSource AzureQueueStorageSource AzureServiceBusQueueSource GoogleCloudAuditLogsSource GoogleCloudBillingSource GoogleCloudPubSubSource GoogleCloudRepositoriesSource GoogleCloudStorageSource HTTPPollerSource OCIMetricsSource SalesforceSource SlackSource TwilioSource WebhookSource ZendeskSource There is an example of Creating a Source available under Guides. API Reference All TriggerMesh-provided sources are listed and documented in the API Reference. Knative Event Sources There are a number of additional event sources provided by Knative. Specifications The specification of each source is available through kubectl explain. For example: kubectl explain googlecloudstoragesource.spec KIND: GoogleCloudStorageSource VERSION: sources.triggermesh.io/v1alpha1 RESOURCE: spec <Object> DESCRIPTION: Desired state of the event source. FIELDS: bucket <string> -required- Name of the Cloud Storage bucket to receive change notifications from. Must meet the naming requirements described at eventTypes <[]string> Types of events to receive change notifications for. Accepted values are listed at. All types are selected when this attribute is not set. pubsub <Object> -required- Attributes related to the configuration of Pub/Sub resources associated with the Cloud Storage bucket. serviceAccountKey <Object> -required- Service account key used to authenticate the event source and allow it to interact with Google Cloud APIs. Only the JSON format is supported. sink <Object> -required- The destination of events received via change notifications.
https://docs.triggermesh.io/concepts/sources/
2022-09-25T02:46:51
CC-MAIN-2022-40
1664030334332.96
[]
docs.triggermesh.io
Rename-Brokerappassignmentpolicyrule¶ Renames an application rule in the site's assignment policy. Syntax¶¶¶ - New-BrokerAppAssignmentPolicyRule - Get-BrokerAppAssignmentPolicyRule - Set-BrokerAppAssignmentPolicyRule - Remove-BrokerAppAssignmentPolicyRule Parameters¶ Input Type¶ Citrix.Broker.Admin.Sdk.Appassignmentpolicyrule¶ The application rule in the assignment policy being renamed. Return Values¶ None Or Citrix.Broker.Admin.Sdk.Appassignmentpolicyrule¶ This cmdlet does not generate any output, unless you use the PassThru parameter, in which case it generates a Citrix.Broker.Admin.SDK.AppAssignmentPolicyRule object. Examples¶ Example 1¶ C:\PS> Rename-BrokerAppAssignmentPolicyRule 'Offshore' -NewName 'Remote Workers' Description¶ Renames the application rule in the assignment policy called Offshore to Remote Workers. The new name of the rule must be unique in the assignment policy.
https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/1808/Broker/Rename-BrokerAppAssignmentPolicyRule/
2022-09-25T01:35:32
CC-MAIN-2022-40
1664030334332.96
[]
developer-docs.citrix.com
HUMAnN2 Description HUMAnN is a pipeline for efficiently and accurately profiling the presence/absence and abundance of microbial pathways in a community from metagenomic or metatranscriptomic sequencing data. This process (functional profiling) aims to describe the metabolic potential of a microbial community and its members. License Free to use and open source under MIT License. Availability Versions available in Puhti * HUMAnN 2.8.8 * MetaPhlAn 3.0.7 * PhyloPhlAn 3.0.60 * StrainPhlAn 3.0 Usage In Puhti, HUMAnN2 is installed as part of gcc 9.1.0 compatible biopythontools module. To activate it, run commands: module load biokit module load biopythontools/3.9.1-gcc_9.1.0 humann2 More information Last update: April 13, 2021
https://docs.csc.fi/apps/humann/
2022-09-25T02:01:48
CC-MAIN-2022-40
1664030334332.96
[]
docs.csc.fi
Search… The Moment Knowledge Base Moment My Pages Tasks Customers Offers Projects Forecast How to setup hours approval in Moment Project Roles Projects Main Page Template projects Activities Map Project plan Project Plan features Budgeted costs and revenue in Moment's Project plan This section explains the main features of the project plan and its use in practice Project plan module The project plan was designed to facilitate the resource planning of a project by giving the project leader the ability to keep track of estimated hours on a project, and the hours that have actually been spent on certain predefined tasks and activities. It allows the project leader to continuously keep track of timesheet registrations and how these hours compare to estimated hours on a project. Planning in practice The planning of a project in Moment can be initiated by choosing an existing project and then creating an activity. When an activity is created, the user can enter the amount of estimated hours that will be spent on the specific activity from the project planner. By clicking on the gear icon on the right corner in the Project Plan tab, the user will be able to click on the option to “show estimated hours on activities”. By selecting this option, the user will be able to see the estimated hours in the column “estimated” as they compare to the timesheet input from each co-worker. At the same time, the user will be able to delegate hours as they see fit in the planning schedule. By clicking on the same gear icon on the right, the user can choose to see “earlier reserved hours” (marked in red below): If this setting is selected, the available hours for the specific coworker will be shown in the project plan. This gives the project manager the opportunity to avoid overbooking. See the screenshot below: ‘ In the project plan schedule, it is possible to assign hours for each co-worker. The hour allocation is decided on a weekly basis, with each square representing one week. Each week is represented by its number in the corresponding year. To see the exact dates for each corresponding week, just hover the mouse pointer over the week number, a tooltip will appear that shows the date for the beginning and the ending of the week. Once the user has chosen the respective squares, the amount of hours can be defined by just writing the number in the square that represents the week of your choice. The total amount of hours allocated from the Project Plan interface will be shown in the column Forecast to the right of the respective row in the Project Plan interface. The amount of hours allocated to the activity from the project plan interface will in turn be compared to the “estimated” column just next to it. If you have clicked on the option to see deviations in the project plan interface, these will be shown to the right of it in the next row. These planned hours will be compared to the hours that the co-worker has registered in its timesheet, in turn these hours will be checked against the estimated hours. If the co-worker has registered less hours than have been estimated for a certain activity, Moment will not register them as a deviation, hence these will not be shown in red once you hover the mouse pointer over the column that shows. Forecasted and estimated hours, along with deviations If a co-worker in the project registers more hours than what has been allocated by the project manager, these hours will show both in the project plan screen as red numbers, and in the co-workers hour-registration window. The coworker has been alloted 8 hours in a week In this example, the co-worker has been allocated 8 hours to perform a certain activity in week 8. If, for instance, the co-worker logs in more hours in that specific week than the amount that has been allocated. This will show in red both in the project planner and the hour registration window. As seen in the screenshots below. From the hour-registration window The coworker has logged more hours than what has been allocated. From the project-plan window Deviation in the project plan In the example above, it is clear the the amount of hours logged is greater than the amount allocated. This will be shown as a deviation both in the project-plan and in the hour registration window. It is worth mentioning that in the project plan, the amount of allocated hours is replaced by the actual amount of hours logged for that activity if the coworker logs more hours than originally allocated. If the co-workers register more hours than have been estimated for that activity they will get a prompt in their timesheet registration that they will now be registering more hours than have been estimated for that activity. Depending on the settings chosen, the co-worker will either get a warning that the hours registered exceed the allocated amount and be allowed to register these, or get a prompt warning and not be allowed to register. The following settings are found in Setup > settings > projects: The rest of the settings in the screenshot above, show different options such as sending an e-mail when hours are exceeded, to show planned hours in the timesheet interface itself, or to add any projects in the timesheet if any registrations are found in the selected period. Reservations (project > reservations). Map Project Plan features Last modified 1yr ago Copy link Outline Project plan module Planning in practice
https://docs.moment.team/help/projects/project-plan-and-reservations
2022-09-25T02:44:06
CC-MAIN-2022-40
1664030334332.96
[]
docs.moment.team
Collect Linux data 🔗 The Splunk Distribution of OpenTelemetry Collector is a package that provides integrated collection and forwarding for all telemetry types for the Linux platform. You can deploy the Collector to gather telemetry for Splunk Infrastructure Monitoring, Splunk APM, or Splunk Log Observer. Supported versions 🔗 The Collector supports the following Linux distributions and versions: Amazon Linux: 2 CentOS / Red Hat / Oracle: 7, 8 Debian: 8, 9, 10 Ubuntu: 16.04, 18.04, 20.04 Splunk Observability Cloud offers a guided setup to install the Collector: Log in to Splunk Observability Cloud. In the left navigation menu, select + Add Integration to open the Integrate Your Data page., then select In the integration filter menu, select All. Select Linux. Select Add Connection. The integration guided setup appears. Follow the steps in the guided setup. For advanced installation instructions, see Linux. Next steps 🔗 Configure the Collector on Linux. See Advanced configurations for Linux. Learn about the Collector commands. See Splunk Distribution of OpenTelemetry Collector commands reference. Troubleshoot Collector issues. See Troubleshoot issues when collecting data.
https://docs.signalfx.com/en/latest/gdi/get-data-in/compute/linux.html
2022-09-25T01:33:24
CC-MAIN-2022-40
1664030334332.96
[]
docs.signalfx.com
pygram11.fix1d# - pygram11.fix1d(x, bins=10, range=None, weights=None, density=False, flow=False, cons_var=False)[source]# Histogram data with fixed (uniform) bin widths. - Parameters x (numpy.ndarray) – Data to histogram. bins (int) – The number of bins. range ((float, float), optional) – The minimum and maximum of the histogram axis. If None, min and max of x will be used. x and weights have incompatible shapes. TypeError – If x or weights are unsupported types - Returns numpy.ndarray– The resulting histogram bin counts. numpy.ndarray, optional – The standard error of each bin count, \(\sqrt{\sum_i w_i^2}\). The return is Noneif weights are not used. If cons_var is True, the variances are returned. Examples A histogram of x with 20 bins between 0 and 100: >>> rng = np.random.default_rng(123) >>> x = rng.uniform(0, 100, size=(100,)) >>> h, __ = fix1d(x, bins=20, range=(0, 100)) When weights are absent the second return is None. The same data, now histogrammed with weights and over/underflow included: >>> rng = np.random.default_rng(123) >>> x = rng.uniform(0, 100, size=(100,)) >>> w = rng.uniform(0.1, 0.9, x.shape[0]) >>> h, stderr = fix1d(x, bins=20, range=(0, 100), weights=w, flow=True)
https://pygram11.readthedocs.io/en/0.12.2/api/fix1d.html
2022-09-25T03:04:59
CC-MAIN-2022-40
1664030334332.96
[]
pygram11.readthedocs.io
Instadapp on Polygon Deployed addressesDeployed addresses Below are all the addresses of Core Contracts of the DSL ecosystem on Polygon: List of other Instadapp related address (non core to DSAs): Networks and Underlying meaningsNetworks and Underlying meanings - Index: This is the Main Contract for all the DeFi Smart Accounts. Used for creating a new DeFi Smart Account of a user and to run a cast function in the new smart account. - InstaList: Maintains a registry of all the DeFi Smart Account users using a Linked List. Using the user’s address, a smart account Id is created which is later mapped to get a smart account address. With this address, an account link is created which is utilised to add and remove accounts from the LinkedList. - InstaAccounts: It’s the DeFi Smart Account Wallet. All smart accounts that are created are a clone of this contract. - InstaConnectors: Holds a registry of all the Connectors associated with InstaDapp. An array of all the connectors is maintained using their address. - InstaMemory: All the data (bytes, uint, address and Storage Id) for the cast function are stored in this contract..
https://docs.instadapp.io/networks/polygon
2022-09-25T02:39:06
CC-MAIN-2022-40
1664030334332.96
[]
docs.instadapp.io